diff --git "a/test.csv" "b/test.csv" deleted file mode 100644--- "a/test.csv" +++ /dev/null @@ -1,28 +0,0 @@ -summary_id,paper_id,source_sid,target_sid,source_text,target_text,target_doc,strategy -C00-2123,C00-2123,6,39,"In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.","In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).","['Word Re-ordering and DP-based Search in Statistical Machine Translation', 'In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).', 'Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÃ\x86cient search algorithm.', 'A search restriction especially useful for the translation direction from German to English is presented.', 'The experimental tests are carried out on the Verbmobil task (GermanEnglish, 8000-word vocabulary), which is a limited-domain spoken-language task.', 'The goal of machine translation is the translation of a text given in some source language into a target language.', 'We are given a source string fJ 1 = f1:::fj :::fJ of length J, which is to be translated into a target string eI 1 = e1:::ei:::eI of length I. Among all possible target strings, we will choose the string with the highest probability: ^eI 1 = arg max eI 1 fPr(eI 1jfJ 1 )g = arg max eI 1 fPr(eI 1) Pr(fJ 1 jeI 1)g : (1) The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language.', 'Pr(eI 1) is the language model of the target language, whereas Pr(fJ 1 jeI1) is the transla tion model.', 'Our approach uses word-to-word dependencies between source and target words.', 'The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).', 'These alignment models are similar to the concept of hidden Markov models (HMM) in speech recognition.', 'The alignment mapping is j ! i = aj from source position j to target position i = aj . The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. 
when translating German compound nouns.', 'A simple extension will be used to handle this problem.', 'In Section 2, we brie y review our approach to statistical machine translation.', 'In Section 3, we introduce our novel concept to word reordering and a DP-based search, which is especially suitable for the translation direction from German to English.', 'This approach is compared to another reordering scheme presented in (Berger et al., 1996).', 'In Section 4, we present the performance measures used and give translation results on the Verbmobil task.', 'In this section, we brie y review our translation approach.', 'In Eq.', '(1), Pr(eI 1) is the language model, which is a trigram language model in this case.', 'For the translation model Pr(fJ 1 jeI 1), we go on the assumption that each source word is aligned to exactly one target word.', 'The alignment model uses two kinds of parameters: alignment probabilities p(aj jajô\x80\x80\x801; I; J), where the probability of alignment aj for position j depends on the previous alignment position ajô\x80\x80\x801 (Ney et al., 2000) and lexicon probabilities p(fj jeaj ).', 'When aligning the words in parallel texts (for language pairs like SpanishEnglish, French-English, ItalianGerman,...), we typically observe a strong localization effect.', 'In many cases, there is an even stronger restriction: over large portions of the source string, the alignment is monotone.', '2.1 Inverted Alignments.', 'To explicitly handle the word reordering between words in source and target language, we use the concept of the so-called inverted alignments as given in (Ney et al., 2000).', 'An inverted alignment is defined as follows: inverted alignment: i ! j = bi: Target positions i are mapped to source positions bi.', ""What is important and is not expressed by the notation is the so-called coverage constraint: each source position j should be 'hit' exactly once by the path of the inverted alignment bI 1 = b1:::bi:::bI . Using the inverted alignments in the maximum approximation, we obtain as search criterion: max I (p(JjI) max eI 1 ( I Yi=1 p(eijeiô\x80\x80\x801 iô\x80\x80\x802) max bI 1 I Yi=1 [p(bijbiô\x80\x80\x801; I; J) p(fbi jei)])) = = max I (p(JjI) max eI 1;bI 1 ( I Yi=1 p(eijeiô\x80\x80\x801 iô\x80\x80\x802) p(bijbiô\x80\x80\x801; I; J) p(fbi jei)])); where the two products over i have been merged into a single product over i. p(eijeiô\x80\x80\x801 iô\x80\x80\x802) is the trigram language model probability."", 'The inverted alignment probability p(bijbiô\x80\x80\x801; I; J) and the lexicon probability p(fbi jei) are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration.', 'The details are given in (Och and Ney, 2000).', 'The sentence length probability p(JjI) is omitted without any loss in performance.', 'For the inverted alignment probability p(bijbiô\x80\x80\x801; I; J), we drop the dependence on the target sentence length I. 2.2 Word Joining.', ""The baseline alignment model does not permit that a source word is aligned to two or more target words, e.g. 
for the translation direction from German toEnglish, the German compound noun 'Zahnarztter min' causes problems, because it must be translated by the two target words dentist's appointment."", 'We use a solution to this problem similar to the one presented in (Och et al., 1999), where target words are joined during training.', 'The word joining is done on the basis of a likelihood criterion.', 'An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.', ""E.g. when 'Zahnarzttermin' is aligned to dentist's, the extended lexicon model might learn that 'Zahnarzttermin' actuallyhas to be aligned to both dentist's and ap pointment."", 'In the following, we assume that this word joining has been carried out.', 'Machine Translation In this case my colleague can not visit on I n d i e s e m F a l l ka nn m e i n K o l l e g e a m the v i e r t e n M a i n i c h t b e s u c h e n S i e you fourth of May Figure 1: Reordering for the German verbgroup.', 'In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).', 'The traveling salesman problem is an optimization problem which is defined as follows: given are a set of cities S = s1; ; sn and for each pair of cities si; sj the cost dij > 0 for traveling from city si to city sj . We are looking for the shortest tour visiting all cities exactly once while starting and ending in city s1.', 'A straightforward way to find the shortest tour is by trying all possible permutations of the n cities.', 'The resulting algorithm has a complexity of O(n!).', 'However, dynamic programming can be used to find the shortest tour in exponential time, namely in O(n22n), using the algorithm by Held and Karp.', 'The approach recursively evaluates a quantity Q(C; j), where C is the set of already visited cities and sj is the last visited city.', 'Subsets C of increasing cardinality c are processed.', 'The algorithm works due to the fact that not all permutations of cities have to be considered explicitly.', 'For a given partial hypothesis (C; j), the order in which the cities in C have been visited can be ignored (except j), only the score for the best path reaching j has to be stored.', 'This algorithm can be applied to statistical machine translation.', 'Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set C of source sentence positions that have been already processed.', 'The advantage is that we can recombine search hypotheses by dynamic programming.', 'The cities of the traveling salesman problem correspond to source Table 1: DP algorithm for statistical machine translation.', 'input: source string f1:::fj :::fJ initialization for each cardinality c = 1; 2; ; J do for each pair (C; j), where j 2 C and jCj = c do for each target word e 2 E Qe0 (e; C; j) = p(fj je) max Ã\x86;e00 j02Cnfjg fp(jjj0; J) p(Ã\x86) pÃ\x86(eje0; e00) Qe00 (e0;C n fjg; j0)g words fj in the input string of length J. 
For the final translation each source position is considered exactly once.', 'Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed.', 'For a trigram language model, the partial hypotheses are of the form (e0; e; C; j).', 'e0; e are the last two target words, C is a coverage set for the already covered source positions and j is the last position visited.', 'Each distance in the traveling salesman problem now corresponds to the negative logarithm of the product of the translation, alignment and language model probabilities.', 'The following auxiliary quantity is defined: Qe0 (e; C; j) := probability of the best partial hypothesis (ei 1; bi 1), where C = fbkjk = 1; ; ig, bi = j, ei = e and eiô\x80\x80\x801 = e0.', 'The type of alignment we have considered so far requires the same length for source and target sentence, i.e. I = J. Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either Ã\x86 = 0 or Ã\x86 = 1 new target words.', 'For Ã\x86 = 1, a new target language word is generated using the trigram language model p(eje0; e00).', 'For Ã\x86 = 0, no new target word is generated, while an additional source sentence position is covered.', 'A modified language model probability pÃ\x86(eje0; e00) is defined as follows: pÃ\x86(eje0; e00) = 1:0 if ��\x86 = 0 p(eje0; e00) if Ã\x86 = 1 : We associate a distribution p(Ã\x86) with the two cases Ã\x86 = 0 and Ã\x86 = 1 and set p(Ã\x86 = 1) = 0:7.', 'The above auxiliary quantity satisfies the following recursive DP equation: Qe0 (e; C; j) = Initial Skip Verb Final 1.', 'In.', '2.', 'diesem 3.', 'Fall.', '4.', 'mein 5.', 'Kollege.', '6.', 'kann 7.nicht 8.', 'besuchen 9.', 'Sie.', '10.', 'am 11.', 'vierten 12.', 'Mai.', '13.', 'Figure 2: Order in which source positions are visited for the example given in Fig.1.', '= p(fj je) max Ã\x86;e00 j02Cnfjg np(jjj0; J) p(Ã\x86) pÃ\x86(eje0; e00) Qe00 (e0;C n fjg; j 0 )o: The DP equation is evaluated recursively for each hypothesis (e0; e; C; j).', 'The resulting algorithm is depicted in Table 1.', 'The complexity of the algorithm is O(E3 J2 2J), where E is the size of the target language vocabulary.', '3.1 Word ReOrdering with Verbgroup.', 'Restrictions: Quasi-monotone Search The above search space is still too large to allow the translation of a medium length input sentence.', 'On the other hand, only very restricted reorderings are necessary, e.g. 
for the translation direction from Table 2: Coverage set hypothesis extensions for the IBM reordering.', 'No: Predecessor coverage set Successor coverage set 1 (f1; ;mg n flg ; l0) !', '(f1; ;mg ; l) 2 (f1; ;mg n fl; l1g ; l0) !', '(f1; ;mg n fl1g ; l) 3 (f1; ;mg n fl; l1; l2g ; l0) !', '(f1; ;mg n fl1; l2g ; l) 4 (f1; ;m ô\x80\x80\x80 1g n fl1; l2; l3g ; l0) !', '(f1; ;mg n fl1; l2; l3g ;m) German to English the monotonicity constraint is violated mainly with respect to the German verbgroup.', 'In German, the verbgroup usually consists of a left and a right verbal brace, whereas in English the words of the verbgroup usually form a sequence of consecutive words.', 'Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup.', 'A typical situation is shown in Figure 1.', ""When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence."", ""Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated."", 'The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R = 10 source positions.', 'To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.', 'Skipped (K): The translation of up to one word may be postponed . Verb (V): The translation of up to two words may be anticipated.', 'Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions.', 'While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.', 'The sequence of states needed to carry out the word reordering example in Fig.', '1 is given in Fig.', '2.', 'The 13 positions of the source sentence are processed in the order shown.', 'A position is presented by the word at that position.', 'Using these states, we define partial hypothesis extensions, which are of the following type: (S0;C n fjg; j0) !', '(S; C; j); Not only the coverage set C and the positions j; j0, but also the verbgroup states S; S0 are taken into account.', 'To be short, we omit the target words e; e0 in the formulation of the search hypotheses.', 'There are 13 types of extensions needed to describe the verbgroup reordering.', 'The details are given in (Tillmann, 2000).', 'For each extension a new position is added to the coverage set.', 'Covering the first uncovered position in the source sentence, we use the language model probability p(ej$; $).', 'Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence.', 'The search starts in the hypothesis (I; f;g; 0).', 'f;g denotes the empty set, where no source sentence position is covered.', 'The following recursive equation is evaluated: Qe0 (e; S; C; j) = (2) = p(fj je) max Ã\x86;e00 np(jjj0; J) p(Ã\x86) pÃ\x86(eje0; e00) max (S0;j0) (S0 ;Cnfjg;j0)!(S;C;j) j02Cnfjg Qe00 (e0; S0;C n fjg; j0)o: The search ends in the hypotheses (I; f1; ; Jg; j).', 'f1; ; Jg denotes a coverage set including all positions from the starting position 1 to position J and j 2 fJ ô\x80\x80\x80L; 
; Jg.', 'The final score is obtained from: max e;e0 j2fJô\x80\x80\x80L;;Jg p($je; e0) Qe0 (e; I; f1; ; Jg; j); where p($je; e0) denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.', 'The complexity of the quasimonotone search is O(E3 J (R2+LR)).', 'The proof is given in (Tillmann, 2000).', '3.2 Reordering with IBM Style.', 'Restrictions We compare our new approach with the word reordering used in the IBM translation approach (Berger et al., 1996).', 'A detailed description of the search procedure used is given in this patent.', 'Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted1.', 'A procedural definition to restrict1In the approach described in (Berger et al., 1996), a mor phological analysis is carried out and word morphemes rather than full-form words are used during the search.', 'Here, we process only full-form words within the translation procedure.', 'the number of permutations carried out for the word reordering is given.', 'During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet.', 'Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4.', 'The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set.', 'This number must be less than or equal to n ô\x80\x80\x80 1.', 'Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions.', 'Ignoring the identity of the target language words e and e0, the possible partial hypothesis extensions due to the IBM restrictions are shown in Table 2.', 'In general, m; l; l0 6= fl1; l2; l3g and in line umber 3 and 4, l0 must be chosen not to violate the above reordering restriction.', 'Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise , there will be four uncovered positions for the predecessor hypothesis violating the restriction.', 'A dynamic programming recursion similar to the one in Eq. 2 is evaluated.', 'In this case, we have no finite-state restrictions for the search space.', 'The search starts in hypothesis (f;g; 0) and ends in the hypotheses (f1; ; Jg; j), with j 2 f1; ; Jg.', 'This approach leads to a search procedure with complexity O(E3 J4).', 'The proof is given in (Tillmann, 2000).', '4.1 The Task and the Corpus.', 'We have tested the translation system on the Verbmobil task (Wahlster 1993).', 'The Verbmobil task is an appointment scheduling task.', 'Two subjects are each given a calendar and they are asked to schedule a meeting.', 'The translation direction is from German to English.', 'A summary of the corpus used in the experiments is given in Table 3.', 'The perplexity for the trigram language model used is 26:5.', 'Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences.', 'Thus, the effects of spontaneous speech are present in the corpus, e.g. 
the syntactic structure of the sentence is rather less restricted, however the effect of speech recognition errors is not covered.', 'For the experiments, we use a simple preprocessing step.', 'German city names are replaced by category markers.', 'The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.', 'Table 3: Training and test conditions for the Verbmobil task (*number of words without punctuation marks).', 'German English Training: Sentences 58 073 Words 519 523 549 921 Words* 418 979 453 632 Vocabulary Size 7939 4648 Singletons 3454 1699 Test-147: Sentences 147 Words 1 968 2 173 Perplexity { 26:5 Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures.', 'Search CPU time mWER SSER Method [sec] [%] [%] MonS 0:9 42:0 30:5 QmS 10:6 34:4 23:8 IbmS 28:6 38:2 26:2 4.2 Performance Measures.', 'The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.', 'On average, 6 reference translations per automatic translation are available.', 'The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken.', 'This measure has the advantage of being completely automatic.', 'SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.', 'For the error counts, a range from 0:0 to 1:0 is used.', 'An error count of 0:0 is assigned to a perfect translation, and an error count of 1:0 is assigned to a semantically and syntactically wrong translation.', '4.3 Translation Experiments.', 'For the translation experiments, Eq. 2 is recursively evaluated.', 'We apply a beam search concept as in speech recognition.', 'However there is no global pruning.', 'Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: QBeam(C) = max e;e0 ;S;j Qe0 (e; S; C; j) The hypothesis (e0; e; S; C; j) is pruned if: Qe0 (e; S; C; j) < t0 QBeam(C); where t0 is a threshold to control the number of surviving hypotheses.', 'Additionally, for a given coverage set, at most 250 different hypotheses are kept during the search process, and the number of different words to be hypothesized by a source word is limited.', 'For each source word f, the list of its possible translations e is sorted according to p(fje) puni(e), where puni(e) is the unigram probability of the English word e. 
It is suÃ\x86cient to consider only the best 50 words.', 'We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasimonotone search (QmS) as presented in this paper and the IBM style (IbmS) search as described in Section 3.2.', 'Table 4 shows translation results for the three approaches.', 'The computing time is given in terms of CPU time per sentence (on a 450MHz PentiumIIIPC).', 'Here, the pruning threshold t0 = 10:0 is used.', 'Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER).', 'The monotone search performs worst in terms of both error rates mWER and SSER.', 'The computing time is low, since no reordering is carried out.', 'The quasi-monotone search performs best in terms of both error rates mWER and SSER.', 'Additionally, it works about 3 times as fast as the IBM style search.', 'For our demonstration system, we typically use the pruning threshold t0 = 5:0 to speed up the search by a factor 5 while allowing for a small degradation in translation accuracy.', 'The effect of the pruning threshold t0 is shown in Table 5.', 'The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0.', 'The negative logarithm of t0 is reported.', 'The translation scores for the hypotheses generated with different threshold values t0 are compared to the translation scores obtained with a conservatively large threshold t0 = 10:0 . For each test series, we count the number of sentences whose score is worse than the corresponding score of the test series with the conservatively large threshold t0 = 10:0, and this number is reported as the number of search errors.', 'Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.', 'Decreasing the threshold results in higher mWER due to additional search errors.', 'Table 5: Effect of the beam threshold on the number of search errors (147 sentences).', 'Search t0 CPU time #search mWER Method [sec] error [%] QmS 0.0 0.07 108 42:6 1.0 0.13 85 37:8 2.5 0.35 44 36:6 5.0 1.92 4 34:6 10.0 10.6 0 34:5 IbmS 0.0 0.14 108 43:4 1.0 0.3 84 39:5 2.5 0.8 45 39:1 5.0 4.99 7 38:3 10.0 28.52 0 38:2 Table 6 shows example translations obtained by the three different approaches.', 'Again, the monotone search performs worst.', 'In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it can not take properly into account the word reordering due to the German verbgroup.', ""The German finite verbs 'bin' (second example) and 'k\x7fonnten' (third example) are too far away from the personal pronouns 'ich' and 'Sie' (6 respectively 5 source sentence positions)."", 'In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable.', 'In this paper, we have presented a new, eÃ\x86cient DP-based search procedure for statistical machine translation.', 'The approach assumes that the word reordering is restricted to a few positions in the source sentence.', 'The approach has been successfully tested on the 8 000-word Verbmobil task.', 'Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word.', '2) An improved language model, which takes into account syntactic structure, e.g. 
to ensure that a proper English verbgroup is generated.', '3) A tight coupling with the speech recognizer output.', 'This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community.', 'Table 6: Example Translations for the Verbmobil task.', 'Input: Ja , wunderbar . K\x7fonnen wir machen . MonS: Yes, wonderful.', 'Can we do . QmS: Yes, wonderful.', 'We can do that . IbmS: Yes, wonderful.', 'We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie w\x7fare es denn am \x7fahm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about \x7fahm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . \x7fAhm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . \x7fAhm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten k\x7fonnten , w\x7fare das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that .']",extractive -C02-1025,C02-1025,7,198,"Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.","Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve 
its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. 
First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . .', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. 
is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . .', '(3)In sentence (1), McCann can be a person or an orga nization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with â\x80\x9cBush put a freeze on . . .', 'â\x80\x9d, because Bush is the first word, the initial caps might be due to its position (as in â\x80\x9cThey put a freeze on . . .', 'â\x80\x9d).', 'If somewhere else in the document we see â\x80\x9crestrictions put in place by President Bushâ\x80\x9d, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder â\x80\x93 650,000 â\x80\x93 790,000 MENE â\x80\x93 â\x80\x93 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7,14%.', 'ICOC and CSPP contributed the greatest im provements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is 
sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive -P87-1015_swastika,P87-1015,2,2,"They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.","In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees. find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars On the basis of this observation, we describe a class of formalisms which we call Linear Context- Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.","['CHARACTERIZING STRUCTURAL DESCRIPTIONS PRODUCED BY VARIOUS GRAMMATICAL FORMALISMS*', 'We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.', 'In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees. find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars On the basis of this observation, we describe a class of formalisms which we call Linear Context- Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.', 'Much of the study of grammatical systems in computational linguistics has been focused on the weak generative capacity of grammatical formalism.', 'Little attention, however, has been paid to the structural descriptions that these formalisms can assign to strings, i.e. 
their strong generative capacity.', 'This aspect of the formalism is both linguistically and computationally important.', ""For example, Gazdar (1985) discusses the applicability of Indexed Grammars (IG's) to Natural Language in terms of the structural descriptions assigned; and Berwick (1984) discusses the strong generative capacity of Lexical-Functional Grammar (LFG) and Government and Bindings grammars (GB)."", ""The work of Thatcher (1973) and Rounds (1969) define formal systems that generate tree sets that are related to CFG's and IG's."", ""We consider properties of the tree sets generated by CFG's, Tree Adjoining Grammars (TAG's), Head Grammars (HG's), Categorial Grammars (CG's), and IG's."", 'We examine both the complexity of the paths of trees in the tree sets, and the kinds of dependencies that the formalisms can impose between paths.', 'These two properties of the tree sets are not only linguistically relevant, but also have computational importance.', ""By considering derivation trees, and thus abstracting away from the details of the composition operation and the structures being manipulated, we are able to state the similarities and differences between the 'This work was partially supported by NSF grants MCS42-19116-CER, MCS82-07294 and DCR-84-10413, ARO grant DAA 29-84-9-0027, and DARPA grant N00014-85-K0018."", 'We are very grateful to Tony Kroc.h, Michael Pails, Sunil Shende, and Mark Steedman for valuable discussions. formalisms.', 'It is striking that from this point of view many formalisms can be grouped together as having identically structured derivation tree sets.', ""This suggests that by generalizing the notion of context-freeness in CFG's, we can define a class of grammatical formalisms that manipulate more complex structures."", ""In this paper, we outline how such family of formalisms can be defined, and show that like CFG's, each member possesses a number of desirable linguistic and computational properties: in particular, the constant growth property and polynomial recognizability."", ""From Thatcher's (1973) work, it is obvious that the complexity of the set of paths from root to frontier of trees in a local set (the tree set of a CFG) is regular'."", 'We define the path set of a tree 1 as the set of strings that label a path from the root to frontier of 7.', 'The path set of a tree set is the union of the path sets of trees in that tree set.', ""It can be easily shown from Thatcher's result that the path set of every local set is a regular set."", ""As a result, CFG's can not provide the structural descriptions in which there are nested dependencies between symbols labelling a path."", ""For example, CFG's cannot produce trees of the form shown in Figure 1 in which there are nested dependencies between S and NP nodes appearing on the spine of the tree."", 'Gazdar (1985) argues this is the appropriate analysis of unbounded dependencies in the hypothetical Scandinavian language Norwedish.', 'He also argues that paired English complementizers may also require structural descriptions whose path sets have nested dependencies.', ""Head Grammars (HG's), introduced by Pollard (1984), is a formalism that manipulates headed strings: i.e., strings, one of whose symbols is distinguished as the head."", 'Not only is concatenation of these strings possible, but head wrapping can be used to split a string and wrap it around another string.', ""The productions of HG's are very similar to those of CFG's except that the operation used must be made explicit."", ""Thus, the 
tree sets generated by HG's are similar to those of CFG's, with each node annotated by the operation (concatenation or wrapping) used to combine the headed strings derived by the daughters of that node. Tree Adjoining Grammars, a tree rewriting formalism, was introduced by Joshi, Levy and Takahashi (1975) and Joshi (1983/85)."", 'A TAG consists of a finite set of elementary trees that are either initial trees or auxiliary trees.', 'Trees are composed using an operation called adjoining, which is defined as follows.', 'Let η be some node labeled X in a tree γ (see Figure 3).', ""Let γ' be a tree with root and foot labeled by X."", ""When γ' is adjoined at η in the tree γ we obtain a tree γ''."", ""The subtree under η is excised from γ, the tree γ' is inserted in its place and the excised subtree is inserted below the foot of γ'."", 'It can be shown that the path set of the tree set generated by a TAG G is a context-free language.', ""TAG's can be used to give the structural descriptions discussed by Gazdar (1985) for the unbounded nested dependencies in Norwedish, for cross serial dependencies in Dutch subordinate clauses, and for the nestings of paired English complementizers."", ""From the definition of TAG's, it follows that the choice of adjunction is not dependent on the history of the derivation."", ""Like CFG's, the choice is predetermined by a finite number of rules encapsulated in the grammar."", ""Thus, the derivation trees for TAG's have the same structure as local sets."", ""As with HG's, derivation structures are annotated; in the case of TAG's, by the trees used for adjunction and addresses of nodes of the elementary tree where adjunctions occurred."", 'We can define derivation trees inductively on the length of the derivation of a tree γ.', 'If γ is an elementary tree, the derivation tree consists of a single node labeled γ.', ""Suppose γ results from the adjunction of γ1, ..., γk at the k distinct tree addresses n1, ..., nk in some elementary tree γ', respectively."", ""The tree denoting this derivation of γ is rooted with a node labeled γ' having k subtrees for the derivations of γ1, ..., γk."", 'The edge from the root to the subtree for the derivation of γi is labeled by the address ni.', 'To show that the derivation tree set of a TAG is a local set, nodes are labeled by pairs consisting of the name of an elementary tree and the address at which it was adjoined, instead of labelling edges with addresses.', ""The following rule corresponds to the above derivation, where γ1, ..., γk are derived from the auxiliary trees β1, ..., βk, respectively, for all addresses n in some elementary tree at which γ' can be adjoined."", ""If γ' is an initial tree we do not include an address on the left-hand side."", ""There has been recent interest in the application of Indexed Grammars (IG's) to natural languages."", ""Gazdar (1985) considers a number of linguistic analyses which IG's (but not CFG's) can make, for example, the Norwedish example shown in Figure 1."", ""The work of Rounds (1969) shows that the path sets of trees derived by IG's (like those of TAG's) are context-free languages."", ""Trees derived by IG's exhibit a property that is not exhibited by the tree sets derived by TAG's or CFG's."", 'Informally, two or more paths can be dependent on each other: for example, they could be required to be of equal length as in the trees in Figure 4. 
generates such a tree set.', ""We focus on this difference between the tree sets of CFG's and IG's, and formalize the notion of dependence between paths in a tree set in Section 3."", 'An IG can be viewed as a CFG in which each nonterminal is associated with a stack.', 'Each production can push or pop symbols on the stack as can be seen in the following productions that generate trees of the form shown in Figure 4b.', 'Gazdar (1985) argues that sharing of stacks can be used to give analyses for coordination.', ""Analogous to the sharing of stacks in IG's, Lexical-Functional Grammars (LFG's) use the unification of unbounded hierarchical structures."", ""Unification is used in LFG's to produce structures having two dependent spines of unbounded length as in Figure 5."", 'Bresnan, Kaplan, Peters, and Zaenen (1982) argue that these structures are needed to describe crossed-serial dependencies in Dutch subordinate clauses.', ""Gazdar (1985) considers a restriction of IG's in which no more than one nonterminal on the right-hand side of a production can inherit the stack from the left-hand side."", 'Unbounded dependencies between branches are not possible in such a system.', ""TAG's can be shown to be equivalent to this restricted system."", ""Thus, TAG's can not give analyses in which dependencies between arbitrarily large branches exist."", 'Steedman (1986) considers Categorial Grammars in which both the operations of function application and composition may be used, and in which functions can specify whether they take their arguments from their right or left.', ""While the generative power of CG's is greater than that of CFG's, it appears to be highly constrained."", ""Hence, their relationship to formalisms such as HG's and TAG's is of interest."", 'On the one hand, the definition of composition in Steedman (1985), which technically permits composition of functions with an unbounded number of arguments, generates tree sets with dependent paths such as those shown in Figure 6.', 'This kind of dependency arises from the use of the composition operation to compose two arbitrarily large categories.', 'This allows an unbounded amount of information about two separate paths (e.g. an encoding of their length) to be combined and used to influence the later derivation.', ""A consequence of the ability to generate tree sets with this property is that CG's under this definition can generate the following language, which can not be generated by either TAG's or HG's."", ""{ 0n0'i'i0'2bin242bn | n = n1 + n2 } On the other hand, no linguistic use is made of this general form of composition, and Steedman (personal communication) and Steedman (1986) argue that a more limited definition of composition is more natural."", 'With this restriction the resulting tree sets will have independent paths.', ""The equivalence of CG's with this restriction to TAG's and HG's is, however, still an open problem."", 'An extension of the TAG system was introduced by Joshi et al. 
(1975) and later redefined by Joshi (1987) in which the adjunction operation is defined on sets of elementary trees rather than single trees.', 'A multicomponent Tree Adjoining Grammar (MCTAG) consists of a finite set of finite elementary tree sets.', 'We must adjoin all trees in an auxiliary tree set together as a single step in the derivation.', 'The adjunction operation with respect to tree sets (multicomponent adjunction) is defined as follows.', 'Each member of a set of trees can be adjoined into distinct nodes of trees in a single elementary tree set, i.e., derivations always involve the adjunction of a derived auxiliary tree set into an elementary tree set.', ""Like CFG's, TAG's, and HG's, the derivation tree set of a MCTAG will be a local set."", 'The derivation trees of a MCTAG are similar to those of a TAG.', 'Instead of the names of elementary trees of a TAG, the nodes are labeled by a sequence of names of trees in an elementary tree set.', 'Since trees in a tree set are adjoined together, the addressing scheme uses a sequence of pairings of the address and name of the elementary tree adjoined at that address.', 'The following context-free production captures the derivation step of the grammar shown in Figure 7, in which the trees in the auxiliary tree set are adjoined into themselves at the root node (address ε).', '(β1, β2, β3) → ((β1, ε), (β2, ε), (β3, ε)) The path complexity of the tree set generated by a MCTAG is not necessarily context-free.', ""Like the string languages of MCTAG's, the complexity of the path set increases as the cardinality of the elementary tree sets increases, though both the string languages and path sets will always be semilinear."", ""MCTAG's are able to generate tree sets having dependent paths."", 'For example, the MCTAG shown in Figure 7 generates trees of the form shown in Figure 4b.', 'The number of paths that can be dependent is bounded by the grammar (in fact the maximum cardinality of a tree set determines this bound).', ""Hence, trees shown in Figure 8 can not be generated by any MCTAG (but can be generated by an IG) because the number of pairs of dependent paths grows with n. 
Since the derivation trees of TAG's, MCTAG's, and HG's are local sets, the choice of the structure used at each point in a derivation in these systems does not depend on the context at that point within the derivation."", ""Thus, as in CFG's, at any point in the derivation, the set of structures that can be applied is determined only by a finite set of rules encapsulated by the grammar."", 'We characterize a class of formalisms that have this property in Section 4.', 'We loosely describe the class of all such systems as Linear Context-Free Rewriting Formalisms.', 'As is described in Section 4, the property of having a derivation tree set that is a local set appears to be useful in showing important properties of the languages generated by the formalisms.', ""The semilinearity of Tree Adjoining Languages (TAL's), MCTAL's, and Head Languages (HL's) can be proved using this property, with suitable restrictions on the composition operations."", 'Roughly speaking, we say that a tree set Γ contains trees with dependent paths if there are two paths pγ = uγvγ and qγ = uγwγ in each γ ∈ Γ such that uγ is some, possibly empty, shared initial subpath; vγ and wγ are not bounded in length; and there is some "dependence" (such as equal length) between the set of all vγ and wγ for each γ ∈ Γ.', 'A tree set may be said to have dependencies between paths if some "appropriate" subset can be shown to have dependent paths as defined above.', 'We attempt to formalize this notion in terms of the tree pumping lemma which can be used to show that a tree set does not have dependent paths.', 'Thatcher (1973) describes a tree pumping lemma for recognizable sets related to the string pumping lemma for regular sets.', 'The tree in Figure 9a can be denoted by t1 t2 t3 where tree substitution is used instead of concatenation.', 'The tree pumping lemma states that if there is a tree, t = t1 t2 t3, generated by a CFG G, whose height is more than a predetermined bound k, then all trees of the form t1 t2^i t3 for each i ≥ 0 will also be generated by G (as shown in Figure 9b).', ""The string pumping lemma for CFG's (uvwxy-theorem) can be seen as a corollary of this lemma. It follows from this pumping lemma that a single path can be pumped independently."", 'For example, let us consider a tree set containing trees of the form shown in Figure 4a.', 'The tree t2 must be on one of the two branches.', 'Pumping t2 will change only one branch and leave the other branch unaffected.', 'Hence, the resulting trees will no longer have two branches of equal size.', ""We can give a tree pumping lemma for TAG's by adapting the uvwxy-theorem for CFL's since the tree sets of TAG's have independent and context-free paths."", 'This pumping lemma states that if there is a tree, t = t1 t2 t3 t4 t5, generated by a TAG G, such that its height is more than a predetermined bound k, then all trees of the form t1 t2^i t3 t4^i t5 for each i ≥ 0 will also be generated by G. Similarly, for tree sets with independent paths and more complex path sets, tree pumping lemmas can be given.', 'We adapt the string pumping lemma for the class of languages corresponding to the complexity of the path set.', 'A geometrical progression of language families defined by Weir (1987) involves tree sets with increasingly complex path sets.', 'The independence of paths in the tree sets of the k-th grammatical formalism in this hierarchy can be shown by means of a tree pumping lemma of the form t1 t2^i t3 t4^i ... t2k^i t2k+1.', 'The path set of tree sets at level k + 1 has the complexity of the string language of level k. 
The independence of paths in a tree set appears to be an important property.', 'A formalism generating tree sets with complex path sets can still generate only semilinear languages if its tree sets have independent paths, and semilinear path sets.', 'For example, the formalisms in the hierarchy described above generate semilinear languages although their path sets become increasingly more complex as one moves up the hierarchy.', 'From the point of view of recognition, independent paths in the derivation structures suggest that a top-down parser (for example) can work on each branch independently, which may lead to efficient parsing using an algorithm based on the Divide and Conquer technique.', 'From the discussion so far it is clear that a number of formalisms involve some type of context-free rewriting (they have derivation trees that are local sets).', 'Our goal is to define a class of formal systems, and show that any member of this class will possess certain attractive properties.', ""In the remainder of the paper, we outline how a class of Linear Context-Free Rewriting Systems (LCFRS's) may be defined and sketch how semilinearity and polynomial recognition of these systems follow."", ""In defining LCFRS's, we hope to generalize the definition of CFG's to formalisms manipulating any structure, e.g. strings, trees, or graphs."", 'To be a member of LCFRS a formalism must satisfy two restrictions.', 'First, any grammar must involve a finite number of elementary structures, composed using a finite number of composition operations.', 'These operations, as we see below, are restricted to be size preserving (as in the case of concatenation in CFG) which implies that they will be linear and non-erasing.', 'A second restriction on the formalisms is that choices during the derivation are independent of the context in the derivation.', ""As will be obvious later, their derivation tree sets will be local sets as are those of CFG's."", 'Each derivation of a grammar can be represented by a generalized context-free derivation tree.', 'These derivation trees show how the composition operations were used to derive the final structures from elementary structures.', 'Nodes are annotated by the name of the composition operation used at that step in the derivation.', ""As in the case of the derivation trees of CFG's, nodes are labeled by a member of some finite set of symbols (perhaps only implicit in the grammar as in TAG's) used to denote derived structures."", 'Frontier nodes are annotated by zero-arity functions corresponding to elementary structures.', ""Each treelet (an internal node with all its children) represents the use of a rule that is encapsulated by the grammar. The grammar encapsulates (either explicitly or implicitly) a finite number of rules that can be written as A → fp(A1, ..., An), n ≥ 0. In the case of CFG's, for each production there is such a rule. In the case of TAG's, a derivation step in which the derived trees β1, ..., βn are adjoined into β at the addresses n1, ..., nn 
would involve the use of the following rule."", ""The composition operations in the case of CFG's are parameterized by the productions."", ""In TAG's the elementary tree and addresses where adjunction takes place are used to instantiate the operation."", 'To show that the derivation trees of any grammar in LCFRS form a local set, we can rewrite the annotated derivation trees such that every node is labelled by a pair to include the composition operations.', ""These systems are similar to those described by Pollard (1984) as Generalized Context-Free Grammars (GCFG's)."", ""Unlike GCFG's, however, the composition operations of LCFRS's are restricted to be linear (do not duplicate unboundedly large structures) and nonerasing (do not erase unbounded structures, a restriction made in most modern transformational grammars)."", ""These two restrictions impose the constraint that the result of composing any two structures should be a structure whose "size" is the sum of its constituents plus some constant. For example, the operation discussed in the case of CFG's (in Section 4.1) adds a constant equal to the sum of the lengths of the strings v1, ..., vn+1. Since we are considering formalisms with arbitrary structures it is difficult to precisely specify all of the restrictions on the composition operations that we believe would appropriately generalize the concatenation operation for the particular structures used by the formalism."", ""In considering recognition of LCFRS's, we make a further assumption concerning the contribution of each structure to the input string, and how the composition operations combine structures in this respect."", ""We can show that languages generated by LCFRS's are semilinear as long as the composition operation does not remove any terminal symbols from its arguments."", 'Semilinearity and the closely related constant growth property (a consequence of semilinearity) have been discussed in the context of grammars for natural languages by Joshi (1983/85) and Berwick and Weinberg (1984).', 'Roughly speaking, a language, L, has the property of semilinearity if the number of occurrences of each symbol in any string is a linear combination of the occurrences of these symbols in some fixed finite set of strings.', 'Thus, the length of any string in L is a linear combination of the length of strings in some fixed finite subset of L, and thus L is said to have the constant growth property.', 'Although this property is not structural, it depends on the structural property that sentences can be built from a finite set of clauses of bounded structure as noted by Joshi (1983/85).', 'The property of semilinearity is concerned only with the occurrence of symbols in strings and not their order.', 'Thus, any language that is letter equivalent to a semilinear language is also semilinear.', 'Two strings are letter equivalent if they contain an equal number of occurrences of each terminal symbol, and two languages are letter equivalent if every string in one language is letter equivalent to a string in the other language and vice-versa.', ""Since every CFL is known to be semilinear (Parikh, 1966), in order to show semilinearity of some language, we need only show the existence of a letter equivalent CFL. Our definition of LCFRS's insists that the composition operations are linear and nonerasing."", 'Hence, the terminal symbols appearing in the structures that are composed are not lost (though a constant number of new symbols may be introduced).', 'If ψ(A) gives the number of occurrences of each 
terminal in the structure named by A, then, given the constraints imposed on the formalism, for each rule A → fp(A1, ..., An) we have the equality ψ(A) = ψ(A1) + ... + ψ(An) + cp, where cp is some constant.', 'We can obtain a letter equivalent CFL defined by a CFG in which, for each rule as above, we have the production A → A1 ... An up, where ψ(up) = cp.', 'Thus, the language generated by a grammar of an LCFRS is semilinear.', ""We now turn our attention to the recognition of string languages generated by these formalisms (LCFRL's)."", ""As suggested at the end of Section 3, the restrictions that have been specified in the definition of LCFRS's suggest that they can be efficiently recognized."", 'In this section, for the purposes of showing that polynomial time recognition is possible, we make the additional restriction that the contribution of a derived structure to the input string can be specified by a bounded sequence of substrings of the input.', 'Since each composition operation is linear and nonerasing, a bounded sequence of substrings associated with the resulting structure is obtained by combining the substrings in each of its arguments using only the concatenation operation, including each substring exactly once.', ""CFG's, TAG's, MCTAG's and HG's are all members of this class since they satisfy these restrictions."", ""Giving a recognition algorithm for LCFRL's involves describing the substrings of the input that are spanned by the structures derived by the LCFRS's and how the composition operation combines these substrings."", ""For example, in TAG's a derived auxiliary tree spans two substrings (to the left and right of the foot node), and the adjunction operation inserts another substring (spanned by the subtree under the node where adjunction takes place) between them (see Figure 3)."", 'We can represent any derived tree of a TAG by the two substrings that appear in its frontier, and then define how the adjunction operation concatenates the substrings.', ""Similarly, for all the LCFRS's, discussed in Section 2, we can define the relationship between a structure and the sequence of substrings it spans, and the effect of the composition operations on sequences of substrings."", ""A derived structure will be mapped onto a sequence of substrings (not necessarily contiguous in the input), and the composition operations will be mapped onto functions that can be defined as follows. 
f((x1, ..., xn1), (y1, ..., yn2)) = (z1, ..., zn3), where each zi is the concatenation of strings from the xj's and yk's."", 'The linear and nonerasing assumptions about the operations discussed in Section 4.1 require that each xj and yk is used exactly once to define the strings z1, ..., zn3.', 'Some of the operations will be constant functions, corresponding to elementary structures, and will be written as f() = (z1, ..., zk), where each zi is a constant, a string of terminal symbols.', 'This representation of structures by substrings and the composition operation by its effect on substrings is related to the work of Rounds (1985).', ""Although embedding this version of LCFRS's in the framework of ILFP developed by Rounds (1985) is straightforward, our motivation was to capture properties shared by a family of grammatical systems and generalize them by defining a class of related formalisms."", 'This class of formalisms has the properties that their derivation trees are local sets, and that they manipulate objects using a finite number of composition operations that use a finite number of symbols.', 'With the additional assumptions, inspired by Rounds (1985), we can show that members of this class can be recognized in polynomial time.', 'We use Alternating Turing Machines (Chandra, Kozen, and Stockmeyer, 1981) to show that polynomial time recognition is possible for the languages discussed in Section 4.3.', 'An ATM has two types of states, existential and universal.', 'In an existential state an ATM behaves like a nondeterministic TM, accepting if one of the applicable moves leads to acceptance; in a universal state the ATM accepts if all the applicable moves lead to acceptance.', 'An ATM may be thought of as spawning independent processes for each applicable move.', 'A k-tape ATM, M, has a read-only input tape and k read-write work tapes.', 'A step of an ATM consists of reading a symbol from each tape and optionally moving each head to the left or right one tape cell.', 'A configuration of M consists of a state of the finite control, the nonblank contents of the input tape and k work tapes, and the position of each head.', 'The space of a configuration is the sum of the lengths of the nonblank tape contents of the k work tapes.', 'M works in space S(n) if for every string that M accepts no configuration exceeds space S(n).', 'It has been shown in (Chandra et al., 1981) that if M works in space log n then there is a deterministic TM which accepts the same language in polynomial time.', 'In the next section, we show how an ATM can accept the strings generated by a grammar in an LCFRS formalism in logspace, and hence show that each family can be recognized in polynomial time.', 'We define an ATM, M, recognizing a language generated by a grammar, G, having the properties discussed in Section 4.3.', 'It can be seen that M performs a top-down recognition of the input a1 ... an in logspace.', 'The rewrite rules and the definition of the composition operations may be stored in the finite state control since G uses a finite number of them.', 'Suppose M has to determine whether the k substrings z1, ..., zk can be derived from some symbol A.', 'Since each zi is a contiguous substring of the input (say ai1 ... ai2), and no two substrings overlap, we can represent zi by the pair of integers (i1, i2).', 'We assume that M is in an existential state qA, with integers i1 and i2 representing zi in the (2i − 1)-th and 2i-th work tapes, for 1 ≤ i ≤ k. 
For each rule p : A → fp(B, C) such that fp is mapped onto the function defined by the following rule, fp((x1, ..., xn1), (y1, ..., yn2)) = (z1, ..., zk), M breaks z1, ..., zk into substrings x1, ..., xn1 and y1, ..., yn2 conforming to the definition of fp.', 'M spawns as many processes as there are ways of breaking up z1, ..., zk and rules with A on their left-hand side.', 'Each spawned process must check if x1, ..., xn1 and y1, ..., yn2 can be derived from B and C, respectively.', ""To do this, the x's and y's are stored in the next 2n1 + 2n2 tapes, and M goes to a universal state."", 'Two processes are spawned, requiring B to derive x1, ..., xn1 and C to derive y1, ..., yn2.', 'Thus, for example, one successor process will have M in the existential state qB with the indices encoding x1, ..., xn1 in the first 2n1 tapes.', 'For rules p : A → fp() such that fp is a constant function, giving an elementary structure, fp is defined such that fp() = (z1, ..., zk), where each zi is a constant string.', 'M must enter a universal state and check that each of the k constant substrings is in the appropriate place (as determined by the contents of the first 2k work tapes) on the input tape.', 'In addition to the tapes required to store the indices, M requires one work tape for splitting the substrings.', 'Thus, the ATM has no more than 6kmax + 1 work tapes, where kmax is the maximum number of substrings spanned by a derived structure.', 'Since the work tapes store integers (which can be written in binary) that never exceed the size of the input, no configuration has space exceeding O(log n).', 'Thus, M works in logspace and recognition can be done on a deterministic TM in polynomial time.', 'We have studied the structural descriptions (tree sets) that can be assigned by various grammatical systems, and classified these formalisms on the basis of two features: path complexity and path independence.', ""We contrasted formalisms such as CFG's, HG's, TAG's and MCTAG's, with formalisms such as IG's and unificational systems such as LFG's and FUG's."", 'We address the question of whether or not a formalism can generate only structural descriptions with independent paths.', 'This property reflects an important aspect of the underlying linguistic theory associated with the formalism.', 'In a grammar which generates independent paths the derivations of sibling constituents can not share an unbounded amount of information.', 'The importance of this property becomes clear in contrasting theories underlying GPSG (Gazdar, Klein, Pullum, and Sag, 1985), and GB (as described by Berwick, 1984) with those underlying LFG and FUG.', ""It is interesting to note, however, that the ability to produce a bounded number of dependent paths (where two dependent paths can share an unbounded amount of information) does not require machinery as powerful as that used in LFG, FUG and IG's."", ""As illustrated by MCTAG's, it is possible for a formalism to give tree sets with bounded dependent paths while still sharing the constrained rewriting properties of CFG's, HG's, and TAG's."", 'In order to observe the similarity between these constrained systems, it is crucial to abstract away from the details of the structures and operations used by the system.', ""The similarities become apparent when they are studied at the level of derivation structures: derivation tree sets of CFG's, HG's, TAG's, and MCTAG's are all local sets."", 'Independence of paths at this level reflects context freeness of rewriting and suggests why they can be recognized 
efficiently.', 'As suggested in Section 4.3.2, a derivation with independent paths can be divided into subcomputations with limited sharing of information.', 'We outlined the definition of a family of constrained grammatical formalisms, called Linear Context-Free Rewriting Systems.', ""This family represents an attempt to generalize the properties shared by CFG's, HG's, TAG's, and MCTAG's."", ""Like HG's, TAG's, and MCTAG's, members of LCFRS can manipulate structures more complex than terminal strings and use composition operations that are more complex that concatenation."", ""We place certain restrictions on the composition operations of LCFRS's, restrictions that are shared by the composition operations of the constrained grammatical systems that we have considered."", 'The operations must be linear and nonerasing, i.e., they can not duplicate or erase structure from their arguments.', ""Notice that even though IG's and LFG's involve CFG-like productions, they are (linguistically) fundamentally different from CFG's because the composition operations need not be linear."", ""By sharing stacks (in IG's) or by using nonlinear equations over f-structures (in FUG's and LFG's), structures with unbounded dependencies between paths can be generated."", ""LCFRS's share several properties possessed by the class of mildly context-sensitive formalisms discussed by Joshi (1983/85)."", 'The results described in this paper suggest a characterization of mild context-sensitivity in terms of generalized context-freeness.', ""Having defined LCFRS's, in Section 4.2 we established the semilinearity (and hence constant growth property) of the languages generated."", 'In considering the recognition of these languages, we were forced to be more specific regarding the relationship between the structures derived by these formalisms and the substrings they span.', 'We insisted that each structure dominates a bounded number of (not necessarily adjacent) substrings.', 'The composition operations are mapped onto operations that use concatenation to define the substrings spanned by the resulting structures.', 'We showed that any system defined in this way can be recognized in polynomial time.', 'Members of LCFRS whose operations have this property can be translated into the ILFP notation (Rounds, 1985).', 'However, in order to capture the properties of various grammatical systems under consideration, our notation is more restrictive that ILFP, which was designed as a general logical notation to characterize the complete class of languages that are recognizable in polynomial time.', ""It is known that CFG's, HG's, and TAG's can be recognized in polynomial time since polynomial time algorithms exist in for each of these formalisms."", ""A corollary of the result of Section 4.3 is that polynomial time recognition of MCTAG's is possible."", 'As discussed in Section 3, independent paths in tree sets, rather than the path complexity, may be crucial in characterizing semilinearity and polynomial time recognition.', 'We would like to relax somewhat the constraint on the path complexity of formalisms in LCFRS.', 'Formalisms such as the restricted indexed grammars (Gazdar, 1985) and members of the hierarchy of grammatical systems given by Weir (1987) have independent paths, but more complex path sets.', 'Since these path sets are semilinear, the property of independent paths in their tree sets is sufficient to cause semilinearity of the languages generated by them.', ""In addition, the restricted version of CG's (discussed in 
Section 6) generates tree sets with independent paths and we hope that it can be included in a more general definition of LCFRS's containing formalisms whose tree sets have path sets that are themselves LCFRL's (as in the case of the restricted indexed grammars, and the hierarchy defined by Weir)."", ""LCFRS's have only been loosely defined in this paper; we have yet to provide a complete set of formal properties associated with members of this class."", ""In this paper, our goal has been to use the notion of LCFRS's to classify grammatical systems on the basis of their strong generative capacity."", 'In considering this aspect of a formalism, we hope to better understand the relationship between the structural descriptions generated by the grammars of a formalism, and the properties of semilinearity and polynomial recognizability.']",abstractive -W11-2123_vardha,W11-2123,5,199,"For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.","For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.","['KenLM: Faster and Smaller Language Model Queries', 'We present KenLM, a library that implements two data structures for efficient language model queries, reducing both time and costs.', 'The structure uses linear probing hash tables and is designed for speed.', 'Compared with the widely- SRILM, our is 2.4 times as fast while using 57% of the mem- The structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed lower memory consumption. 
simultaneously uses less memory than the smallest lossless baseline and less CPU than the baseline.', 'Our code is thread-safe, and integrated into the Moses, cdec, and Joshua translation systems.', 'This paper describes the several performance techniques used and presents benchmarks against alternative implementations.', 'Language models are widely applied in natural language processing, and applications such as machine translation make very frequent queries.', 'This paper presents methods to query N-gram language models, minimizing time and space costs.', 'Queries take the form p(wn|wn−1 1 ) where wn1 is an n-gram.', 'Backoff-smoothed models estimate this probability based on the observed entry with longest matching history wnf , returning where the probability p(wn|wn−1 f ) and backoff penalties b(wn−1 i ) are given by an already-estimated model.', 'The problem is to store these two values for a large and sparse set of n-grams in a way that makes queries efficient.', 'Many packages perform language model queries.', 'Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.', 'IRSTLM 5.60.02 (Federico et al., 2008) is a sorted trie implementation designed for lower memory consumption.', 'MITLM 0.4 (Hsu and Glass, 2008) is mostly designed for accurate model estimation, but can also compute perplexity.', 'RandLM 0.2 (Talbot and Osborne, 2007) stores large-scale models in less memory using randomized data structures.', 'BerkeleyLM revision 152 (Pauls and Klein, 2011) implements tries based on hash tables and sorted arrays in Java with lossy quantization.', 'Sheffield Guthrie and Hepple (2010) explore several randomized compression techniques, but did not release code.', 'TPT Germann et al. 
(2009) describe tries with better locality properties, but did not release code.', 'These packages are further described in Section 3.', 'We substantially outperform all of them on query speed and offer lower memory consumption than lossless alternatives.', 'Performance improvements transfer to the Moses (Koehn et al., 2007), cdec (Dyer et al., 2010), and Joshua (Li et al., 2009) translation systems where our code has been integrated.', 'Our open-source (LGPL) implementation is also available for download as a standalone package with minimal (POSIX and g++) dependencies.', 'We implement two data structures: PROBING, designed for speed, and TRIE, optimized for memory.', 'The set of n-grams appearing in a model is sparse, and we want to efficiently find their associated probabilities and backoff penalties.', 'An important subproblem of language model storage is therefore sparse mapping: storing values for sparse keys using little memory then retrieving values given keys using little time.', 'We use two common techniques, hash tables and sorted arrays, describing each before the model that uses the technique.', 'Hash tables are a common sparse mapping technique used by SRILM’s default and BerkeleyLM’s hashed variant.', 'Keys to the table are hashed, using for example Austin Appleby’s MurmurHash2, to integers evenly distributed over a large range.', 'This range is collapsed to a number of buckets, typically by taking the hash modulo the number of buckets.', 'Entries landing in the same bucket are said to collide.', 'Several methods exist to handle collisions; we use linear probing because it has less memory overhead when entries are small.', 'Linear probing places at most one entry in each bucket.', 'When a collision occurs, linear probing places the entry to be inserted in the next (higher index) empty bucket, wrapping around as necessary.', 'Therefore, a populated probing hash table consists of an array of buckets that contain either one entry or are empty.', 'Non-empty buckets contain an entry belonging to them or to a preceding bucket where a conflict occurred.', 'Searching a probing hash table consists of hashing the key, indexing the corresponding bucket, and scanning buckets until a matching key is found or an empty bucket is encountered, in which case the key does not exist in the table.', 'Linear probing hash tables must have more buckets than entries, or else an empty bucket will never be found.', 'The ratio of buckets to entries is controlled by space multiplier m > 1.', 'As the name implies, space is O(m) and linear in the number of entries.', 'The fraction of buckets that are empty is m−1 m , so average lookup time is O( m 1) and, crucially, constant in the number of entries.', 'When keys are longer than 64 bits, we conserve space by replacing the keys with their 64-bit hashes.', 'With a good hash function, collisions of the full 64bit hash are exceedingly rare: one in 266 billion queries for our baseline model will falsely find a key not present.', 'Collisions between two keys in the table can be identified at model building time.', 'Further, the special hash 0 suffices to flag empty buckets.', 'The PROBING data structure is a rather straightforward application of these hash tables to store Ngram language models.', 'Unigram lookup is dense so we use an array of probability and backoff values.', 'For 2 < n < N, we use a hash table mapping from the n-gram to the probability and backoff3.', 'Vocabulary lookup is a hash table mapping from word to vocabulary index.', 'In all cases, 
the key is collapsed to its 64-bit hash.', 'Given counts cn1 where e.g. c1 is the vocabulary size, total memory consumption, in bits, is Our PROBING data structure places all n-grams of the same order into a single giant hash table.', 'This differs from other implementations (Stolcke, 2002; Pauls and Klein, 2011) that use hash tables as nodes in a trie, as explained in the next section.', 'Our implementation permits jumping to any n-gram of any length with a single lookup; this appears to be unique among language model implementations.', 'Sorted arrays store key-value pairs in an array sorted by key, incurring no space overhead.', 'SRILM’s compact variant, IRSTLM, MITLM, and BerkeleyLM’s sorted variant are all based on this technique.', 'Given a sorted array A, these other packages use binary search to find keys in O(log |A|) time.', 'We reduce this to O(log log |A|) time by evenly distributing keys over their range then using interpolation search4 (Perl et al., 1978).', 'Interpolation search formalizes the notion that one opens a dictionary near the end to find the word “zebra.” Initially, the algorithm knows the array begins at b +— 0 and ends at e +— |A|−1.', 'Given a key k, it estimates the position If the estimate is exact (A[pivot] = k), then the algorithm terminates succesfully.', 'If e < b then the key is not found.', 'Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: l +— pivot; if A[pivot] > k then u +— pivot.', 'Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution.', 'If the key distribution’s range is also known (i.e. vocabulary identifiers range from 0 to the number of words), then interpolation search can use this information instead of reading A[0] and A[|A |− 1] to estimate pivots; this optimization alone led to a 24% speed improvement.', 'The improvement is due to the cost of bit-level reads and avoiding reads that may fall in different virtual memory pages.', 'Vocabulary lookup is a sorted array of 64-bit word hashes.', 'The index in this array is the vocabulary identifier.', 'This has the effect of randomly permuting vocabulary identifiers, meeting the requirements of interpolation search when vocabulary identifiers are used as keys.', 'While sorted arrays could be used to implement the same data structure as PROBING, effectively making m = 1, we abandoned this implementation because it is slower and larger than a trie implementation.', 'The trie data structure is commonly used for language modeling.', 'Our TRIE implements the popular reverse trie, in which the last word of an n-gram is looked up first, as do SRILM, IRSTLM’s inverted variant, and BerkeleyLM except for the scrolling variant.', 'Figure 1 shows an example.', 'Nodes in the trie are based on arrays sorted by vocabulary identifier.', 'We maintain a separate array for each length n containing all n-gram entries sorted in suffix order.', 'Therefore, for n-gram wn1 , all leftward extensions wn0 are an adjacent block in the n + 1-gram array.', 'The record for wn1 stores the offset at which its extensions begin.', 'Reading the following record’s offset indicates where the block ends.', 'This technique was introduced by Clarkson and Rosenfeld (1997) and is also implemented by IRSTLM and BerkeleyLM’s compressed option.', 'SRILM inefficiently stores 64-bit pointers.', 'Unigram records store probability, backoff, and an index in the bigram table.', 'Entries for 2 < n < N store a 
vocabulary identifier, probability, backoff, and an index into the n + 1-gram table.', 'The highest-order N-gram array omits backoff and the index, since these are not applicable.', 'Values in the trie are minimally sized at the bit level, improving memory consumption over trie implementations in SRILM, IRSTLM, and BerkeleyLM.', 'Given n-gram counts {cn}Nn=1, we use ⌈log2 c1⌉ bits per vocabulary identifier and ⌈log2 cn⌉ per index into the table of n-grams.', 'When SRILM estimates a model, it sometimes removes n-grams but not n + 1-grams that extend it to the left.', 'In a model we built with default settings, 1.2% of n + 1-grams were missing their n-gram suffix.', 'This causes a problem for reverse trie implementations, including SRILM itself, because it leaves n+1-grams without an n-gram node pointing to them.', 'We resolve this problem by inserting an entry with probability set to an otherwise-invalid value (−∞).', 'Queries detect the invalid probability, using the node only if it leads to a longer match.', 'By contrast, BerkeleyLM’s hash and compressed variants will return incorrect results based on an n −1-gram.', 'Floating point values may be stored in the trie exactly, using 31 bits for non-positive log probability and 32 bits for backoff.', 'To conserve memory at the expense of accuracy, values may be quantized using q bits per probability and r bits per backoff.', 'We allow any number of bits from 2 to 25, unlike IRSTLM (8 bits) and BerkeleyLM (17−20 bits).', 'To quantize, we use the binning method (Federico and Bertoldi, 2006) that sorts values, divides into equally sized bins, and averages within each bin.', 'The cost of storing these averages, in bits, is 32(2^q + 2^r). Because there are comparatively few unigrams, we elected to store them byte-aligned and unquantized, making every query faster.', 'Unigrams also have 64-bit overhead for vocabulary lookup.', 'Using cn to denote the number of n-grams, total memory consumption of TRIE, in bits, is a similar sum over all orders, plus quantization tables, if used.', 'The size of TRIE is particularly sensitive to ⌈log2 c1⌉, so vocabulary filtering is quite effective at reducing model size.', 'SRILM (Stolcke, 2002) is widely used within academia.', ""It is generally considered to be fast (Pauls 
and Klein, 2011), with a default implementation based on hash tables within each trie node."", 'Each trie node is individually allocated and full 64-bit pointers are used to find them, wasting memory.', 'The compact variant uses sorted arrays instead of hash tables within each node, saving some memory, but still stores full 64-bit pointers.', 'With some minor API changes, namely returning the length of the n-gram matched, it could also be faster—though this would be at the expense of an optimization we explain in Section 4.1.', 'The PROBING model was designed to improve upon SRILM by using linear probing hash tables (though not arranged in a trie), allocating memory all at once (eliminating the need for full pointers), and being easy to compile.', 'IRSTLM (Federico et al., 2008) is an open-source toolkit for building and querying language models.', 'The developers aimed to reduce memory consumption at the expense of time.', 'Their default variant implements a forward trie, in which words are looked up in their natural left-to-right order.', 'However, their inverted variant implements a reverse trie using less CPU and the same amount of memory7.', 'Each trie node contains a sorted array of entries and they use binary search.', 'Compared with SRILM, IRSTLM adds several features: lower memory consumption, a binary file format with memory mapping, caching to increase speed, and quantization.', 'Our TRIE implementation is designed to improve upon IRSTLM using a reverse trie with improved search, bit level packing, and stateful queries.', 'IRSTLM’s quantized variant is the inspiration for our quantized variant.', 'Unfortunately, we were unable to correctly run the IRSTLM quantized variant.', 'The developers suggested some changes, such as building the model from scratch with IRSTLM, but these did not resolve the problem.', 'Our code has been publicly available and intergrated into Moses since October 2010.', 'Later, BerkeleyLM (Pauls and Klein, 2011) described ideas similar to ours.', 'Most similar is scrolling queries, wherein left-to-right queries that add one word at a time are optimized.', 'Both implementations employ a state object, opaque to the application, that carries information from one query to the next; we discuss both further in Section 4.2.', 'State is implemented in their scrolling variant, which is a trie annotated with forward and backward pointers.', 'The hash variant is a reverse trie with hash tables, a more memory-efficient version of SRILM’s default.', 'While the paper mentioned a sorted variant, code was never released.', 'The compressed variant uses block compression and is rather slow as a result.', 'A direct-mapped cache makes BerkeleyLM faster on repeated queries, but their fastest (scrolling) cached version is still slower than uncached PROBING, even on cache-friendly queries.', 'For all variants, we found that BerkeleyLM always rounds the floating-point mantissa to 12 bits then stores indices to unique rounded floats.', 'The 1-bit sign is almost always negative and the 8-bit exponent is not fully used on the range of values, so in practice this corresponds to quantization ranging from 17 to 20 total bits.', 'Lossy compressed models RandLM (Talbot and Osborne, 2007) and Sheffield (Guthrie and Hepple, 2010) offer better memory consumption at the expense of CPU and accuracy.', 'These enable much larger models in memory, compensating for lost accuracy.', 'Typical data structures are generalized Bloom filters that guarantee a customizable probability of returning the 
correct answer.', 'Minimal perfect hashing is used to find the index at which a quantized probability and possibly backoff are stored.', 'These models generally outperform our memory consumption but are much slower, even when cached.', 'In addition to the optimizations specific to each datastructure described in Section 2, we implement several general optimizations for language modeling.', 'Applications such as machine translation use language model probability as a feature to assist in choosing between hypotheses.', 'Dynamic programming efficiently scores many hypotheses by exploiting the fact that an N-gram language model conditions on at most N − 1 preceding words.', 'We call these N − 1 words state.', 'When two partial hypotheses have equal state (including that of other features), they can be recombined and thereafter efficiently handled as a single packed hypothesis.', 'If there are too many distinct states, the decoder prunes low-scoring partial hypotheses, possibly leading to a search error.', 'Therefore, we want state to encode the minimum amount of information necessary to properly compute language model scores, so that the decoder will be faster and make fewer search errors.', 'We offer a state function s(wn1) = wn� where substring wn� is guaranteed to extend (to the right) in the same way that wn1 does for purposes of language modeling.', 'The state function is integrated into the query process so that, in lieu of the query p(wnjwn−1 1 ), the application issues query p(wnjs(wn−1 1 )) which also returns s(wn1 ).', 'The returned state s(wn1) may then be used in a followon query p(wn+1js(wn1)) that extends the previous query by one word.', 'These make left-to-right query patterns convenient, as the application need only provide a state and the word to append, then use the returned state to append another word, etc.', 'We have modified Moses (Koehn et al., 2007) to keep our state with hypotheses; to conserve memory, phrases do not keep state.', 'Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension.', 'Language models that contain wi must also contain prefixes wi for 1 G i G k. Therefore, when the model is queried for p(wnjwn−1 1 ) but the longest matching suffix is wnf , it may return state s(wn1) = wnf since no longer context will be found.', 'IRSTLM and BerkeleyLM use this state function (and a limit of N −1 words), but it is more strict than necessary, so decoders using these packages will miss some recombination opportunities.', 'State will ultimately be used as context in a subsequent query.', 'If the context wnf will never extend to the right (i.e. 
wnf v is not present in the model for all words v) then no subsequent query will match the full context.', 'If the log backoff of wnf is also zero (it may not be in filtered models), then wf should be omitted from the state.', 'This logic applies recursively: if wnf+1 similarly does not extend and has zero log backoff, it too should be omitted, terminating with a possibly empty context.', 'We indicate whether a context with zero log backoff will extend using the sign bit: +0.0 for contexts that extend and −0.0 for contexts that do not extend.', 'RandLM and SRILM also remove context that will not extend, but SRILM performs a second lookup in its trie whereas our approach has minimal additional cost.', 'Section 4.1 explained that state s is stored by applications with partial hypotheses to determine when they can be recombined.', 'In this section, we extend state to optimize left-to-right queries.', 'All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin of sentence token or null context for mid-sentence fragments.', 'Storing state therefore becomes a time-space tradeoff; for example, we store state with partial hypotheses in Moses but not with each phrase.', 'To optimize left-to-right queries, we extend state to store backoff information: where m is the minimal context from Section 4.1 and b is the backoff penalty.', 'Because b is a function, no additional hypothesis splitting happens.', 'As noted in Section 1, our code finds the longest matching entry wnf for query p(wn|s(wn−1 f ) The probability p(wn|wn−1 f ) is stored with wnf and the backoffs are immediately accessible in the provided state s(wn−1 When our code walks the data structure to find wnf , it visits wnn, wnn−1, ... 
, wnf .', 'Each visited entry wni stores backoff b(wni ).', 'These are written to the state s(wn1) and returned so that they can be used for the following query.', 'Saving state allows our code to walk the data structure exactly once per query.', 'Other packages walk their respective data structures once to find wnf and again to find {b(wn−1 i )}f−1 i=1if necessary.', 'In both cases, SRILM walks its trie an additional time to minimize context as mentioned in Section 4.1.', 'BerkeleyLM uses states to optimistically search for longer n-gram matches first and must perform twice as many random accesses to retrieve backoff information.', 'Further, it needs extra pointers in the trie, increasing model size by 40%.', 'This makes memory usage comparable to our PROBING model.', 'The PROBING model can perform optimistic searches by jumping to any n-gram without needing state and without any additional memory.', 'However, this optimistic search would not visit the entries necessary to store backoff information in the outgoing state.', 'Though we do not directly compare state implementations, performance metrics in Table 1 indicate our overall method is faster.', 'Only IRSTLM does not support threading.', 'In our case multi-threading is trivial because our data structures are read-only and uncached.', 'Memory mapping also allows the same model to be shared across processes on the same machine.', 'Along with IRSTLM and TPT, our binary format is memory mapped, meaning the file and in-memory representation are the same.', 'This is especially effective at reducing load time, since raw bytes are read directly to memory—or, as happens with repeatedly used models, are already in the disk cache.', 'Lazy mapping reduces memory requirements by loading pages from disk only as necessary.', 'However, lazy mapping is generally slow because queries against uncached pages must wait for the disk.', 'This is especially bad with PROBING because it is based on hashing and performs random lookups, but it is not intended to be used in low-memory scenarios.', 'TRIE uses less memory and has better locality.', 'However, TRIE partitions storage by n-gram length, so walking the trie reads N disjoint pages.', 'TPT has theoretically better locality because it stores ngrams near their suffixes, thereby placing reads for a single query in the same or adjacent pages.', 'We do not experiment with models larger than physical memory in this paper because TPT is unreleased, factors such as disk speed are hard to replicate, and in such situations we recommend switching to a more compact representation, such as RandLM.', 'In all of our experiments, the binary file (whether mapped or, in the case of most other packages, interpreted) is loaded into the disk cache in advance so that lazy mapping will never fault to disk.', 'This is similar to using the Linux MAP POPULATE flag that is our default loading mechanism.', 'This section measures performance on shared tasks in order of increasing complexity: sparse lookups, evaluating perplexity of a large file, and translation with Moses.', 'Our test machine has two Intel Xeon E5410 processors totaling eight cores, 32 GB RAM, and four Seagate Barracuda disks in software RAID 0 running Linux 2.6.18.', 'Sparse lookup is a key subproblem of language model queries.', 'We compare three hash tables: our probing implementation, GCC’s hash set, and Boost’s8 unordered.', 'For sorted lookup, we compare interpolation search, standard C++ binary search, and standard C++ set based on red-black trees.', 'The 
data structure was populated with 64-bit integers sampled uniformly without replacement.', 'For queries, we uniformly sampled 10 million hits and 10 million misses.', 'The same numbers were used for each data structure.', 'Time includes all queries but excludes random number generation and data structure population.', 'Figure 2 shows timing results.', 'For the PROBING implementation, hash table sizes are in the millions, so the most relevant values are on the right size of the graph, where linear probing wins.', 'It also uses less memory, with 8 bytes of overhead per entry (we store 16-byte entries with m = 1.5); linked list implementations hash set and unordered require at least 8 bytes per entry for pointers.', 'Further, the probing hash table does only one random lookup per query, explaining why it is faster on large data.', 'Interpolation search has a more expensive pivot but performs less pivoting and reads, so it is slow on small data and faster on large data.', 'This suggests a strategy: run interpolation search until the range narrows to 4096 or fewer entries, then switch to binary search.', 'However, reads in the TRIE data structure are more expensive due to bit-level packing, so we found that it is faster to use interpolation search the entire time.', 'Memory usage is the same as with binary search and lower than with set.', 'For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.', 'The model was built with open vocabulary, modified Kneser-Ney smoothing, and default pruning settings that remove singletons of order 3 and higher.', 'Unlike Germann et al. (2009), we chose a model size so that all benchmarks fit comfortably in main memory.', 'Benchmarks use the package’s binary format; our code is also the fastest at building a binary file.', 'As noted in Section 4.4, disk cache state is controlled by reading the entire binary file before each test begins.', 'For RandLM, we used the settings in the documentation: 8 bits per value and false positive probability 1 256.', 'We evaluate the time and memory consumption of each data structure by computing perplexity on 4 billion tokens from the English Gigaword corpus (Parker et al., 2009).', 'Tokens were converted to vocabulary identifiers in advance and state was carried from each query to the next.', 'Table 1 shows results of the benchmark.', 'Compared to decoding, this task is cache-unfriendly in that repeated queries happen only as they naturally occur in text.', 'Therefore, performance is more closely tied to the underlying data structure than to the cache.', 'In fact, we found that enabling IRSTLM’s cache made it slightly slower, so results in Table 1 use IRSTLM without caching.', 'Moses sets the cache size parameter to 50 so we did as well; the resulting cache size is 2.82 GB.', 'The results in Table 1 show PROBING is 81% faster than TRIE, which is in turn 31% faster than the fastest baseline.', 'Memory usage in PROBING is high, though SRILM is even larger, so where memory is of concern we recommend using TRIE, if it fits in memory.', 'For even larger models, we recommend RandLM; the memory consumption of the cache is not expected to grow with model size, and it has been reported to scale well.', 'Another option is the closedsource data structures from Sheffield (Guthrie and Hepple, 2010).', 'Though we are not able to calculate their memory usage 
on our model, results reported in their paper suggest lower memory consumption than TRIE on large-scale models, at the expense of CPU time.', 'This task measures how well each package performs in machine translation.', 'We run the baseline Moses system for the French-English track of the 2011 Workshop on Machine Translation,9 translating the 3003-sentence test set.', 'Based on revision 4041, we modified Moses to print process statistics before terminating.', 'Process statistics are already collected by the kernel (and printing them has no meaningful impact on performance).', 'SRILM’s compact variant has an incredibly expensive destructor, dwarfing the time it takes to perform translation, and so we also modified Moses to avoiding the destructor by calling exit instead of returning normally.', 'Since our destructor is an efficient call to munmap, bypassing the destructor favors only other packages.', 'The binary language model from Section 5.2 and text phrase table were forced into disk cache before each run.', 'Time starts when Moses is launched and therefore includes model loading time.', 'These conaUses lossy compression. bThe 8-bit quantized variant returned incorrect probabilities as explained in Section 3.', 'It did 402 queries/ms using 1.80 GB. cMemory use increased during scoring due to batch processing (MIT) or caching (Rand).', 'The first value reports use immediately after loading while the second reports the increase during scoring. dBerkeleyLM is written in Java which requires memory be specified in advance.', 'Timing is based on plentiful memory.', 'Then we ran binary search to determine the least amount of memory with which it would run.', 'The first value reports resident size after loading; the second is the gap between post-loading resident memory and peak virtual memory.', 'The developer explained that the loading process requires extra memory that it then frees. eBased on the ratio to SRI’s speed reported in Guthrie and Hepple (2010) under different conditions.', 'Memory usage is likely much lower than ours. fThe original paper (Germann et al., 2009) provided only 2s of query timing and compared with SRI when it exceeded available RAM.', 'The authors provided us with a ratio between TPT and SRI under different conditions. aLossy compression with the same weights. bLossy compression with retuned weights. 
ditions make the value appropriate for estimating repeated run times, such as in parameter tuning.', 'Table 2 shows single-threaded results, mostly for comparison to IRSTLM, and Table 3 shows multi-threaded results.', 'Part of the gap between resident and virtual memory is due to the time at which data was collected.', 'Statistics are printed before Moses exits and after parts of the decoder have been destroyed.', 'Moses keeps language models and many other resources in static variables, so these are still resident in memory.', 'Further, we report current resident memory and peak virtual memory because these are the most applicable statistics provided by the kernel.', 'Overall, language modeling significantly impacts decoder performance.', 'In line with perplexity results from Table 1, the PROBING model is the fastest followed by TRIE, and subsequently other packages.', 'We incur some additional memory cost due to storing state in each hypothesis, though this is minimal compared with the size of the model itself.', 'The TRIE model continues to use the least memory of ing (-P) with MAP POPULATE, the default.', 'IRST is not threadsafe.', 'Time for Moses itself to load, including loading the language model and phrase table, is included.', 'Along with locking and background kernel operations such as prefaulting, this explains why wall time is not one-eighth that of the single-threaded case. aLossy compression with the same weights. bLossy compression with retuned weights. the non-lossy options.', 'For RandLM and IRSTLM, the effect of caching can be seen on speed and memory usage.', 'This is most severe with RandLM in the multi-threaded case, where each thread keeps a separate cache, exceeding the original model size.', 'As noted for the perplexity task, we do not expect cache to grow substantially with model size, so RandLM remains a low-memory option.', 'Caching for IRSTLM is smaller at 0.09 GB resident memory, though it supports only a single thread.', 'The BerkeleyLM direct-mapped cache is in principle faster than caches implemented by RandLM and by IRSTLM, so we may write a C++ equivalent implementation as future work.', 'RandLM’s stupid backoff variant stores counts instead of probabilities and backoffs.', 'It also does not prune, so comparing to our pruned model would be unfair.', 'Using RandLM and the documented settings (8-bit values and 1 256 false-positive probability), we built a stupid backoff model on the same data as in Section 5.2.', 'We used this data to build an unpruned ARPA file with IRSTLM’s improved-kneser-ney option and the default three pieces.', 'Table 4 shows the results.', 'We elected run Moses single-threaded to minimize the impact of RandLM’s cache on memory use.', 'RandLM is the clear winner in RAM utilization, but is also slower and lower quality.', 'However, the point of RandLM is to scale to even larger data, compensating for this loss in quality.', 'There any many techniques for improving language model speed and reducing memory consumption.', 'For speed, we plan to implement the direct-mapped cache from BerkeleyLM.', 'Much could be done to further reduce memory consumption.', 'Raj and Whittaker (2003) show that integers in a trie implementation can be compressed substantially.', 'Quantization can be improved by jointly encoding probability and backoff.', 'For even larger models, storing counts (Talbot and Osborne, 2007; Pauls and Klein, 2011; Guthrie and Hepple, 2010) is a possibility.', 'Beyond optimizing the memory size of TRIE, there are alternative data 
structures such as those in Guthrie and Hepple (2010).', 'Finally, other packages implement language model estimation while we are currently dependent on them to generate an ARPA file.', 'While we have minimized forward-looking state in Section 4.1, machine translation systems could also benefit by minimizing backward-looking state.', 'For example, syntactic decoders (Koehn et al., 2007; Dyer et al., 2010; Li et al., 2009) perform dynamic programming parametrized by both backward- and forward-looking state.', 'If they knew that the first four words in a hypergraph node would never extend to the left and form a 5-gram, then three or even fewer words could be kept in the backward state.', 'This information is readily available in TRIE where adjacent records with equal pointers indicate no further extension of context is possible.', 'Exposing this information to the decoder will lead to better hypothesis recombination.', 'Generalizing state minimization, the model could also provide explicit bounds on probability for both backward and forward extension.', 'This would result in better rest cost estimation and better pruning.10 In general, tighter, but well factored, integration between the decoder and language model should produce a significant speed improvement.', 'We have described two data structures for language modeling that achieve substantial reductions in time and memory cost.', 'The PROBING model is 2.4 times as fast as the fastest alternative, SRILM, and uses less memory too.', 'The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.', 'These performance gains transfer to improved system runtime performance; though we focused on Moses, our code is the best lossless option with cdec and Joshua.', 'We attain these results using several optimizations: hashing, custom lookup tables, bit-level packing, and state for left-to-right query patterns.', 'The code is opensource, has minimal dependencies, and offers both C++ and Java interfaces for integration.', 'Alon Lavie advised on this work.', 'Hieu Hoang named the code “KenLM” and assisted with Moses along with Barry Haddow.', 'Adam Pauls provided a pre-release comparison to BerkeleyLM and an initial Java interface.', 'Nicola Bertoldi and Marcello Federico assisted with IRSTLM.', 'Chris Dyer integrated the code into cdec.', 'Juri Ganitkevitch answered questions about Joshua.', 'This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No.', '0750271 and by the DARPA GALE program.']",extractive -C00-2123,C00-2123,9,4,The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.,"The experimental tests are carried out on the Verbmobil task (GermanEnglish, 8000-word vocabulary), which is a limited-domain spoken-language task.","['Word Re-ordering and DP-based Search in Statistical Machine Translation', 'In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).', 'Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÃ\x86cient search algorithm.', 'A search restriction especially useful for the translation direction from German to English is presented.', 'The experimental tests are carried out on the Verbmobil task (GermanEnglish, 8000-word vocabulary), which is a limited-domain 
spoken-language task.', 'The goal of machine translation is the translation of a text given in some source language into a target language.', 'We are given a source string fJ 1 = f1:::fj :::fJ of length J, which is to be translated into a target string eI 1 = e1:::ei:::eI of length I. Among all possible target strings, we will choose the string with the highest probability: ^eI 1 = arg max eI 1 fPr(eI 1jfJ 1 )g = arg max eI 1 fPr(eI 1) Pr(fJ 1 jeI 1)g : (1) The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language.', 'Pr(eI 1) is the language model of the target language, whereas Pr(fJ 1 jeI1) is the transla tion model.', 'Our approach uses word-to-word dependencies between source and target words.', 'The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).', 'These alignment models are similar to the concept of hidden Markov models (HMM) in speech recognition.', 'The alignment mapping is j ! i = aj from source position j to target position i = aj . The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. when translating German compound nouns.', 'A simple extension will be used to handle this problem.', 'In Section 2, we brie y review our approach to statistical machine translation.', 'In Section 3, we introduce our novel concept to word reordering and a DP-based search, which is especially suitable for the translation direction from German to English.', 'This approach is compared to another reordering scheme presented in (Berger et al., 1996).', 'In Section 4, we present the performance measures used and give translation results on the Verbmobil task.', 'In this section, we brie y review our translation approach.', 'In Eq.', '(1), Pr(eI 1) is the language model, which is a trigram language model in this case.', 'For the translation model Pr(fJ 1 jeI 1), we go on the assumption that each source word is aligned to exactly one target word.', 'The alignment model uses two kinds of parameters: alignment probabilities p(aj jajô\x80\x80\x801; I; J), where the probability of alignment aj for position j depends on the previous alignment position ajô\x80\x80\x801 (Ney et al., 2000) and lexicon probabilities p(fj jeaj ).', 'When aligning the words in parallel texts (for language pairs like SpanishEnglish, French-English, ItalianGerman,...), we typically observe a strong localization effect.', 'In many cases, there is an even stronger restriction: over large portions of the source string, the alignment is monotone.', '2.1 Inverted Alignments.', 'To explicitly handle the word reordering between words in source and target language, we use the concept of the so-called inverted alignments as given in (Ney et al., 2000).', 'An inverted alignment is defined as follows: inverted alignment: i ! j = bi: Target positions i are mapped to source positions bi.', ""What is important and is not expressed by the notation is the so-called coverage constraint: each source position j should be 'hit' exactly once by the path of the inverted alignment bI 1 = b1:::bi:::bI . 
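The Bayes decision rule quoted as Eq. (1) above is mangled by the text extraction. As a reading aid, here is one plausible LaTeX reconstruction based on the surrounding definitions of the source string $f_1^J$ and the target string $e_1^I$ (my reconstruction, not the original typesetting):

$$\hat{e}_1^{I} \;=\; \operatorname*{arg\,max}_{e_1^{I}} \Pr(e_1^{I} \mid f_1^{J}) \;=\; \operatorname*{arg\,max}_{e_1^{I}} \bigl\{ \Pr(e_1^{I}) \cdot \Pr(f_1^{J} \mid e_1^{I}) \bigr\} \qquad (1)$$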
Using the inverted alignments in the maximum approximation, we obtain as search criterion: $\max_I \big( p(J|I) \max_{e_1^I} \big( \prod_{i=1}^{I} p(e_i|e_{i-1},e_{i-2}) \max_{b_1^I} \prod_{i=1}^{I} [\, p(b_i|b_{i-1},I,J)\, p(f_{b_i}|e_i) \,] \big) \big) = \max_I \big( p(J|I) \max_{e_1^I,\, b_1^I} \prod_{i=1}^{I} p(e_i|e_{i-1},e_{i-2})\, p(b_i|b_{i-1},I,J)\, p(f_{b_i}|e_i) \big)$, where the two products over $i$ have been merged into a single product over $i$; $p(e_i|e_{i-1},e_{i-2})$ is the trigram language model probability."", 'The inverted alignment probability $p(b_i|b_{i-1},I,J)$ and the lexicon probability $p(f_{b_i}|e_i)$ are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration.', 'The details are given in (Och and Ney, 2000).', 'The sentence length probability $p(J|I)$ is omitted without any loss in performance.', 'For the inverted alignment probability $p(b_i|b_{i-1},I,J)$, we drop the dependence on the target sentence length $I$. 2.2 Word Joining.', ""The baseline alignment model does not permit that a source word is aligned to two or more target words, e.g. for the translation direction from German to English, the German compound noun 'Zahnarzttermin' causes problems, because it must be translated by the two target words dentist's appointment."", 'We use a solution to this problem similar to the one presented in (Och et al., 1999), where target words are joined during training.', 'The word joining is done on the basis of a likelihood criterion.', 'An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.', ""E.g. when 'Zahnarzttermin' is aligned to dentist's, the extended lexicon model might learn that 'Zahnarzttermin' actually has to be aligned to both dentist's and appointment."", 'In the following, we assume that this word joining has been carried out.', '[Figure 1: Reordering for the German verbgroup. Example: In diesem Fall kann mein Kollege am vierten Mai nicht besuchen Sie (In this case my colleague can not visit you on the fourth of May).]', 'In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).', 'The traveling salesman problem is an optimization problem which is defined as follows: given are a set of cities $S = s_1, \ldots, s_n$ and for each pair of cities $s_i, s_j$ the cost $d_{ij} > 0$ for traveling from city $s_i$ to city $s_j$. 
We are looking for the shortest tour visiting all cities exactly once while starting and ending in city $s_1$.', 'A straightforward way to find the shortest tour is by trying all possible permutations of the n cities.', 'The resulting algorithm has a complexity of $O(n!)$.', 'However, dynamic programming can be used to find the shortest tour in exponential time, namely in $O(n^2 2^n)$, using the algorithm by Held and Karp.', 'The approach recursively evaluates a quantity $Q(C, j)$, where $C$ is the set of already visited cities and $s_j$ is the last visited city.', 'Subsets $C$ of increasing cardinality $c$ are processed.', 'The algorithm works due to the fact that not all permutations of cities have to be considered explicitly.', 'For a given partial hypothesis $(C, j)$, the order in which the cities in $C$ have been visited can be ignored (except $j$); only the score for the best path reaching $j$ has to be stored.', 'This algorithm can be applied to statistical machine translation.', 'Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set $C$ of source sentence positions that have been already processed.', 'The advantage is that we can recombine search hypotheses by dynamic programming.', 'The cities of the traveling salesman problem correspond to source words $f_j$ in the input string of length $J$.', 'Table 1: DP algorithm for statistical machine translation. Input: source string $f_1 \ldots f_j \ldots f_J$. After initialization, for each cardinality $c = 1, 2, \ldots, J$, for each pair $(C, j)$ with $j \in C$ and $|C| = c$, and for each target word $e \in E$: $Q_{e^\prime}(e, C, j) = p(f_j|e) \cdot \max_{\delta, e^{\prime\prime},\, j^\prime \in C \setminus \{j\}} \{\, p(j|j^\prime, J)\; p(\delta)\; p_\delta(e|e^\prime, e^{\prime\prime})\; Q_{e^{\prime\prime}}(e^\prime, C \setminus \{j\}, j^\prime) \,\}$.', 'For the final translation each source position is considered exactly once.', 'Subsets of partial hypotheses with coverage sets $C$ of increasing cardinality $c$ are processed.', 'For a trigram language model, the partial hypotheses are of the form $(e^\prime, e, C, j)$.', '$e^\prime$ and $e$ are the last two target words, $C$ is a coverage set for the already covered source positions and $j$ is the last position visited.', 'Each distance in the traveling salesman problem now corresponds to the negative logarithm of the product of the translation, alignment and language model probabilities.', 'The following auxiliary quantity is defined: $Q_{e^\prime}(e, C, j)$ := probability of the best partial hypothesis $(e_1^i, b_1^i)$, where $C = \{b_k \mid k = 1, \ldots, i\}$, $b_i = j$, $e_i = e$ and $e_{i-1} = e^\prime$.', 'The type of alignment we have considered so far requires the same length for source and target sentence, i.e. $I = J$. 
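To make the Held-Karp recursion described above concrete, here is a minimal Python sketch of the $O(n^2 2^n)$ dynamic program over subsets of visited cities. The function name, the frozenset bookkeeping, and the toy distance matrix are illustrative choices, not taken from the paper.

```python
from itertools import combinations

def held_karp(dist):
    """Shortest tour over cities 0..n-1, starting and ending at city 0.

    dist[i][j] is the cost of traveling from city i to city j.
    Q[(C, j)] stores the best cost of reaching city j having visited exactly
    the cities in C (C always contains 0 and j), mirroring Q(C, j) above.
    """
    n = len(dist)
    Q = {(frozenset([0, j]), j): dist[0][j] for j in range(1, n)}
    for c in range(3, n + 1):                      # subsets of increasing cardinality
        for subset in combinations(range(1, n), c - 1):
            C = frozenset(subset) | {0}
            for j in subset:
                Q[(C, j)] = min(Q[(C - {j}, k)] + dist[k][j]
                                for k in subset if k != j)
    full = frozenset(range(n))
    return min(Q[(full, j)] + dist[j][0] for j in range(1, n))

# Example: 4 cities with symmetric costs; the best tour is 0-1-3-2-0 with cost 23.
d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(held_karp(d))  # 23
```

The only state kept per subset is the best score for each last city $j$, which is exactly the observation that makes the DP tractable and that carries over to the coverage sets used in the translation search.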
Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either Ã\x86 = 0 or Ã\x86 = 1 new target words.', 'For Ã\x86 = 1, a new target language word is generated using the trigram language model p(eje0; e00).', 'For Ã\x86 = 0, no new target word is generated, while an additional source sentence position is covered.', 'A modified language model probability pÃ\x86(eje0; e00) is defined as follows: pÃ\x86(eje0; e00) = 1:0 if Ã\x86 = 0 p(eje0; e00) if Ã\x86 = 1 : We associate a distribution p(Ã\x86) with the two cases Ã\x86 = 0 and Ã\x86 = 1 and set p(Ã\x86 = 1) = 0:7.', 'The above auxiliary quantity satisfies the following recursive DP equation: Qe0 (e; C; j) = Initial Skip Verb Final 1.', 'In.', '2.', 'diesem 3.', 'Fall.', '4.', 'mein 5.', 'Kollege.', '6.', 'kann 7.nicht 8.', 'besuchen 9.', 'Sie.', '10.', 'am 11.', 'vierten 12.', 'Mai.', '13.', 'Figure 2: Order in which source positions are visited for the example given in Fig.1.', '= p(fj je) max Ã\x86;e00 j02Cnfjg np(jjj0; J) p(Ã\x86) pÃ\x86(eje0; e00) Qe00 (e0;C n fjg; j 0 )o: The DP equation is evaluated recursively for each hypothesis (e0; e; C; j).', 'The resulting algorithm is depicted in Table 1.', 'The complexity of the algorithm is O(E3 J2 2J), where E is the size of the target language vocabulary.', '3.1 Word ReOrdering with Verbgroup.', 'Restrictions: Quasi-monotone Search The above search space is still too large to allow the translation of a medium length input sentence.', 'On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from Table 2: Coverage set hypothesis extensions for the IBM reordering.', 'No: Predecessor coverage set Successor coverage set 1 (f1; ;mg n flg ; l0) !', '(f1; ;mg ; l) 2 (f1; ;mg n fl; l1g ; l0) !', '(f1; ;mg n fl1g ; l) 3 (f1; ;mg n fl; l1; l2g ; l0) !', '(f1; ;mg n fl1; l2g ; l) 4 (f1; ;m ô\x80\x80\x80 1g n fl1; l2; l3g ; l0) !', '(f1; ;mg n fl1; l2; l3g ;m) German to English the monotonicity constraint is violated mainly with respect to the German verbgroup.', 'In German, the verbgroup usually consists of a left and a right verbal brace, whereas in English the words of the verbgroup usually form a sequence of consecutive words.', 'Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup.', 'A typical situation is shown in Figure 1.', ""When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence."", ""Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated."", 'The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R = 10 source positions.', 'To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.', 'Skipped (K): The translation of up to one word may be postponed . 
Verb (V): The translation of up to two words may be anticipated.', 'Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions.', 'While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.', 'The sequence of states needed to carry out the word reordering example in Fig.', '1 is given in Fig.', '2.', 'The 13 positions of the source sentence are processed in the order shown.', 'A position is presented by the word at that position.', 'Using these states, we define partial hypothesis extensions, which are of the following type: (S0;C n fjg; j0) !', '(S; C; j); Not only the coverage set C and the positions j; j0, but also the verbgroup states S; S0 are taken into account.', 'To be short, we omit the target words e; e0 in the formulation of the search hypotheses.', 'There are 13 types of extensions needed to describe the verbgroup reordering.', 'The details are given in (Tillmann, 2000).', 'For each extension a new position is added to the coverage set.', 'Covering the first uncovered position in the source sentence, we use the language model probability p(ej$; $).', 'Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence.', 'The search starts in the hypothesis (I; f;g; 0).', 'f;g denotes the empty set, where no source sentence position is covered.', 'The following recursive equation is evaluated: Qe0 (e; S; C; j) = (2) = p(fj je) max Ã\x86;e00 np(jjj0; J) p(Ã\x86) pÃ\x86(eje0; e00) max (S0;j0) (S0 ;Cnfjg;j0)!(S;C;j) j02Cnfjg Qe00 (e0; S0;C n fjg; j0)o: The search ends in the hypotheses (I; f1; ; Jg; j).', 'f1; ; Jg denotes a coverage set including all positions from the starting position 1 to position J and j 2 fJ ô\x80\x80\x80L; ; Jg.', 'The final score is obtained from: max e;e0 j2fJô\x80\x80\x80L;;Jg p($je; e0) Qe0 (e; I; f1; ; Jg; j); where p($je; e0) denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.', 'The complexity of the quasimonotone search is O(E3 J (R2+LR)).', 'The proof is given in (Tillmann, 2000).', '3.2 Reordering with IBM Style.', 'Restrictions We compare our new approach with the word reordering used in the IBM translation approach (Berger et al., 1996).', 'A detailed description of the search procedure used is given in this patent.', 'Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted1.', 'A procedural definition to restrict1In the approach described in (Berger et al., 1996), a mor phological analysis is carried out and word morphemes rather than full-form words are used during the search.', 'Here, we process only full-form words within the translation procedure.', 'the number of permutations carried out for the word reordering is given.', 'During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet.', 'Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4.', 'The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set.', 'This number must be less than or equal to n ô\x80\x80\x80 1.', 'Otherwise for the predecessor search 
hypothesis, we would have chosen a position that would not have been among the first n uncovered positions.', 'Ignoring the identity of the target language words e and e0, the possible partial hypothesis extensions due to the IBM restrictions are shown in Table 2.', 'In general, m; l; l0 6= fl1; l2; l3g and in line umber 3 and 4, l0 must be chosen not to violate the above reordering restriction.', 'Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise , there will be four uncovered positions for the predecessor hypothesis violating the restriction.', 'A dynamic programming recursion similar to the one in Eq. 2 is evaluated.', 'In this case, we have no finite-state restrictions for the search space.', 'The search starts in hypothesis (f;g; 0) and ends in the hypotheses (f1; ; Jg; j), with j 2 f1; ; Jg.', 'This approach leads to a search procedure with complexity O(E3 J4).', 'The proof is given in (Tillmann, 2000).', '4.1 The Task and the Corpus.', 'We have tested the translation system on the Verbmobil task (Wahlster 1993).', 'The Verbmobil task is an appointment scheduling task.', 'Two subjects are each given a calendar and they are asked to schedule a meeting.', 'The translation direction is from German to English.', 'A summary of the corpus used in the experiments is given in Table 3.', 'The perplexity for the trigram language model used is 26:5.', 'Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences.', 'Thus, the effects of spontaneous speech are present in the corpus, e.g. the syntactic structure of the sentence is rather less restricted, however the effect of speech recognition errors is not covered.', 'For the experiments, we use a simple preprocessing step.', 'German city names are replaced by category markers.', 'The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.', 'Table 3: Training and test conditions for the Verbmobil task (*number of words without punctuation marks).', 'German English Training: Sentences 58 073 Words 519 523 549 921 Words* 418 979 453 632 Vocabulary Size 7939 4648 Singletons 3454 1699 Test-147: Sentences 147 Words 1 968 2 173 Perplexity { 26:5 Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures.', 'Search CPU time mWER SSER Method [sec] [%] [%] MonS 0:9 42:0 30:5 QmS 10:6 34:4 23:8 IbmS 28:6 38:2 26:2 4.2 Performance Measures.', 'The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors.', 'On average, 6 reference translations per automatic translation are available.', 'The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken.', 'This measure has the advantage of being completely automatic.', 'SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.', 'For the error counts, a range from 0:0 to 1:0 is used.', 'An error count of 0:0 is assigned to a perfect translation, and an error count of 1:0 is assigned to a semantically and 
syntactically wrong translation.', '4.3 Translation Experiments.', 'For the translation experiments, Eq. 2 is recursively evaluated.', 'We apply a beam search concept as in speech recognition.', 'However there is no global pruning.', 'Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: QBeam(C) = max e;e0 ;S;j Qe0 (e; S; C; j) The hypothesis (e0; e; S; C; j) is pruned if: Qe0 (e; S; C; j) < t0 QBeam(C); where t0 is a threshold to control the number of surviving hypotheses.', 'Additionally, for a given coverage set, at most 250 different hypotheses are kept during the search process, and the number of different words to be hypothesized by a source word is limited.', 'For each source word f, the list of its possible translations e is sorted according to p(fje) puni(e), where puni(e) is the unigram probability of the English word e. It is suÃ\x86cient to consider only the best 50 words.', 'We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasimonotone search (QmS) as presented in this paper and the IBM style (IbmS) search as described in Section 3.2.', 'Table 4 shows translation results for the three approaches.', 'The computing time is given in terms of CPU time per sentence (on a 450MHz PentiumIIIPC).', 'Here, the pruning threshold t0 = 10:0 is used.', 'Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER).', 'The monotone search performs worst in terms of both error rates mWER and SSER.', 'The computing time is low, since no reordering is carried out.', 'The quasi-monotone search performs best in terms of both error rates mWER and SSER.', 'Additionally, it works about 3 times as fast as the IBM style search.', 'For our demonstration system, we typically use the pruning threshold t0 = 5:0 to speed up the search by a factor 5 while allowing for a small degradation in translation accuracy.', 'The effect of the pruning threshold t0 is shown in Table 5.', 'The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0.', 'The negative logarithm of t0 is reported.', 'The translation scores for the hypotheses generated with different threshold values t0 are compared to the translation scores obtained with a conservatively large threshold t0 = 10:0 . 
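A minimal Python sketch of the per-coverage-set pruning described above, assuming hypothesis scores are stored as negative log probabilities, so that the threshold $t_0$ becomes an additive beam width and the histogram limit of 250 hypotheses per coverage set can be applied after sorting. The data layout and names are illustrative, not the authors' implementation.

```python
from collections import defaultdict

def prune(hypotheses, beam=10.0, histogram=250):
    """Per-coverage-set pruning sketch (illustrative).

    Each hypothesis is (coverage, state, cost) with cost = -log probability,
    so 'cost <= best + beam' corresponds to Q >= t0 * QBeam(C) with beam = -log t0.
    At most `histogram` hypotheses survive per coverage set (250 in the paper).
    """
    by_coverage = defaultdict(list)
    for hyp in hypotheses:
        by_coverage[hyp[0]].append(hyp)

    survivors = []
    for coverage, hyps in by_coverage.items():
        best = min(cost for _, _, cost in hyps)          # QBeam(C) in -log space
        kept = [h for h in hyps if h[2] <= best + beam]  # threshold pruning
        kept.sort(key=lambda h: h[2])
        survivors.extend(kept[:histogram])               # histogram pruning
    return survivors

# Tiny example with two coverage sets and a tight beam of 1.0.
hyps = [(frozenset({1}), "a", 2.0), (frozenset({1}), "b", 3.5), (frozenset({2}), "c", 1.0)]
print(prune(hyps, beam=1.0))  # the 3.5 hypothesis falls outside the beam for {1}
```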
For each test series, we count the number of sentences whose score is worse than the corresponding score of the test series with the conservatively large threshold t0 = 10:0, and this number is reported as the number of search errors.', 'Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.', 'Decreasing the threshold results in higher mWER due to additional search errors.', 'Table 5: Effect of the beam threshold on the number of search errors (147 sentences).', 'Search t0 CPU time #search mWER Method [sec] error [%] QmS 0.0 0.07 108 42:6 1.0 0.13 85 37:8 2.5 0.35 44 36:6 5.0 1.92 4 34:6 10.0 10.6 0 34:5 IbmS 0.0 0.14 108 43:4 1.0 0.3 84 39:5 2.5 0.8 45 39:1 5.0 4.99 7 38:3 10.0 28.52 0 38:2 Table 6 shows example translations obtained by the three different approaches.', 'Again, the monotone search performs worst.', 'In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it can not take properly into account the word reordering due to the German verbgroup.', ""The German finite verbs 'bin' (second example) and 'k\x7fonnten' (third example) are too far away from the personal pronouns 'ich' and 'Sie' (6 respectively 5 source sentence positions)."", 'In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable.', 'In this paper, we have presented a new, eÃ\x86cient DP-based search procedure for statistical machine translation.', 'The approach assumes that the word reordering is restricted to a few positions in the source sentence.', 'The approach has been successfully tested on the 8 000-word Verbmobil task.', 'Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word.', '2) An improved language model, which takes into account syntactic structure, e.g. to ensure that a proper English verbgroup is generated.', '3) A tight coupling with the speech recognizer output.', 'This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community.', 'Table 6: Example Translations for the Verbmobil task.', 'Input: Ja , wunderbar . K\x7fonnen wir machen . MonS: Yes, wonderful.', 'Can we do . QmS: Yes, wonderful.', 'We can do that . IbmS: Yes, wonderful.', 'We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie w\x7fare es denn am \x7fahm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about \x7fahm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . \x7fAhm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . \x7fAhm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten k\x7fonnten , w\x7fare das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . 
Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that .']",extractive -D10-1083,D10-1083,1,14,"In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.",These sequence models-based approaches commonly treat token-level tag assignment as the primary latent variable.,"['Simple Type-Level Unsupervised POS Tagging', 'Part-of-speech (POS) tag distributions are known to exhibit sparsity â\x80\x94 a word is likely to take a single predominant tag in a corpus.', 'Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy.', 'However, in existing systems, this expansion come with a steep increase in model complexity.', 'This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments.', 'In addition, this formulation results in a dramatic reduction in the number of model parameters thereby, enabling unusually rapid training.', 'Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts.', 'On several languages, we report performance exceeding that of more complex state-of-the art systems.1', 'Since the early days of statistical NLP, researchers have observed that a part-of-speech tag distribution exhibits â\x80\x9cone tag per discourseâ\x80\x9d sparsity â\x80\x94 words are likely to select a single predominant tag in a corpus, even when several tags are possible.', 'Simply assigning to each word its most frequent associated tag in a corpus achieves 94.6% accuracy on the WSJ portion of the Penn Treebank.', 'This distributional sparsity of syntactic tags is not unique to English 1 The source code for the work presented in this paper is available at http://groups.csail.mit.edu/rbg/code/typetagging/.', 'â\x80\x94 similar results have been observed across multiple languages.', 'Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.', 'In practice, this sparsity constraint is difficult to incorporate in a traditional POS induction system (Me´rialdo, 1994; Johnson, 2007; Gao and Johnson, 2008; Grac¸a et al., 2009; Berg-Kirkpatrick et al., 2010).', 'These sequence models-based approaches commonly treat token-level tag assignment as the primary latent variable.', 'By design, they readily capture regularities at the token-level.', 'However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.', 'Previous work has attempted to incorporate such constraints into token-level models via heavy-handed modifications to inference procedure and objective function (e.g., posterior regularization and ILP decoding) (Grac¸a et al., 2009; Ravi and Knight, 2009).', 'In most cases, however, these expansions come with a steep increase in model complexity, with respect to training procedure and inference time.', 'In this work, we take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.', 'The model starts by generating a tag assignment for each 
word type in a vocabulary, assuming one tag per word.', 'Then, token- level HMM emission parameters are drawn conditioned on these assignments such that each word is only allowed probability mass on a single assigned tag.', 'In this way we restrict the parameterization of a Language Original case English Danish Dutch German Spanish Swedish Portuguese 94.6 96.3 96.6 95.5 95.4 93.3 95.6 Table 1: Upper bound on tagging accuracy assuming each word type is assigned to majority POS tag.', 'Across all languages, high performance can be attained by selecting a single tag per word type.', 'token-level HMM to reflect lexicon sparsity.', 'This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).', 'There are two key benefits of this model architecture.', 'First, it directly encodes linguistic intuitions about POS tag assignments: the model structure reflects the one-tag-per-word property, and a type- level tag prior captures the skew on tag assignments (e.g., there are fewer unique determiners than unique nouns).', 'Second, the reduced number of hidden variables and parameters dramatically speeds up learning and inference.', 'We evaluate our model on seven languages exhibiting substantial syntactic variation.', 'On several languages, we report performance exceeding that of state-of-the art systems.', 'Our analysis identifies three key factors driving our performance gain: 1) selecting a model structure which directly encodes tag sparsity, 2) a type-level prior on tag assignments, and 3) a straightforward na¨ıveBayes approach to incorporate features.', 'The observed performance gains, coupled with the simplicity of model implementation, makes it a compelling alternative to existing more complex counterparts.', 'Recent work has made significant progress on unsupervised POS tagging (Me´rialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson,2007; Goldwater and Griffiths, 2007; Gao and John son, 2008; Ravi and Knight, 2009).', 'Our work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.', 'This line of work has been motivated by empirical findings that the standard EM-learned unsupervised HMM does not exhibit sufficient word tag sparsity.', 'The extent to which this constraint is enforced varies greatly across existing methods.', 'On one end of the spectrum are clustering approaches that assign a single POS tag to each word type (Schutze, 1995; Lamar et al., 2010).', 'These clusters are computed using an SVD variant without relying on transitional structure.', 'While our method also enforces a singe tag per word constraint, it leverages the transition distribution encoded in an HMM, thereby benefiting from a richer representation of context.', 'Other approaches encode sparsity as a soft constraint.', 'For instance, by altering the emission distribution parameters, Johnson (2007) encourages the model to put most of the probability mass on few tags.', 'This design does not guarantee â\x80\x9cstructural zeros,â\x80\x9d but biases towards sparsity.', 'A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Grac¸a et al., 2009).', 'This approach makes the training objective more complex by adding linear constraints proportional to the number of word types, which is rather 
prohibitive.', 'A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types.', 'The use of ILP in learning the desired grammar significantly increases the computational complexity of this method.', 'In contrast to these approaches, our method directly incorporates these constraints into the structure of the model.', 'This design leads to a significant reduction in the computational complexity of training and inference.', 'Another thread of relevant research has explored the use of features in unsupervised POS induction (Smith and Eisner, 2005; Berg-Kirkpatrick et al., 2010; Hasan and Ng, 2009).', 'These methods demonstrated the benefits of incorporating linguistic features using a log-linear parameterization, but requires elaborate machinery for training.', 'In our work, we demonstrate that using a simple na¨ıveBayes approach also yields substantial performance gains, without the associated training complexity.', 'We consider the unsupervised POS induction problem without the use of a tagging dictionary.', 'A graphical depiction of our model as well as a summary of random variables and parameters can be found in Figure 1.', 'As is standard, we use a fixed constant K for the number of tagging states.', 'Model Overview The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.', 'Conditioned on T , features of word types W are drawn.', 'We refer to (T , W ) as the lexicon of a language and Ï\x88 for the parameters for their generation; Ï\x88 depends on a single hyperparameter β.', 'Once the lexicon has been drawn, the model proceeds similarly to the standard token-level HMM: Emission parameters θ are generated conditioned on tag assignments T . We also draw transition parameters Ï\x86.', 'Both parameters depend on a single hyperparameter α.', 'Once HMM parameters (θ, Ï\x86) are drawn, a token-level tag and word sequence, (t, w), is generated in the standard HMM fashion: a tag sequence t is generated from Ï\x86.', 'The corresponding token words w are drawn conditioned on t and θ.2 Our full generative model is given by: K P (Ï\x86, θ|T , α, β) = n (P (Ï\x86t|α)P (θt|T , α)) t=1 The transition distribution Ï\x86t for each tag t is drawn according to DIRICHLET(α, K ), where α is the shared transition and emission distribution hyperparameter.', 'In total there are O(K 2) parameters associated with the transition parameters.', 'In contrast to the Bayesian HMM, θt is not drawn from a distribution which has support for each of the n word types.', 'Instead, we condition on the type-level tag assignments T . Specifically, let St = {i|Ti = t} denote the indices of theword types which have been assigned tag t accord ing to the tag assignments T . Then θt is drawn from DIRICHLET(α, St), a symmetric Dirichlet which only places mass on word types indicated by St. 
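A small Python sketch of the generative story just described: one tag per word type, transition and emission distributions drawn from symmetric Dirichlets, and emission support restricted to the word types assigned to each tag. The uniform type-level tag draw corresponds roughly to the 1TW variant; all names are illustrative, not the authors' code.

```python
import numpy as np

def generate(word_types, K, alpha, length, seed=0):
    """Illustrative sketch of the type-level generative process (not the paper's code)."""
    rng = np.random.default_rng(seed)
    n = len(word_types)
    T = rng.integers(0, K, size=n)              # type level: one tag per word type
    phi = rng.dirichlet([alpha] * K, size=K)    # transitions phi_t ~ Dirichlet(alpha)
    theta = np.zeros((K, n))                    # emissions restricted to S_t = {i : T_i = t}
    for t in range(K):
        support = np.where(T == t)[0]
        if len(support):
            theta[t, support] = rng.dirichlet([alpha] * len(support))
    tags, words = [], []                        # token level: a standard HMM walk
    t = rng.integers(0, K)
    for _ in range(length):
        t = rng.choice(K, p=phi[t])
        # crude fallback if a tag happens to have no assigned word types
        w = rng.choice(n, p=theta[t]) if theta[t].sum() else rng.integers(0, n)
        tags.append(int(t))
        words.append(word_types[w])
    return T, tags, words

# Example: a toy vocabulary of four word types and K = 3 tags.
print(generate(["the", "dog", "runs", "Paris"], K=3, alpha=0.1, length=5)[2])
```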
This ensures that each word will only be assigned a single tag at inference time (see Section 4).', 'Note that while the standard HMM, has O(K n) emission parameters, our model has O(n) effective parameters.3 Token Component Once HMM parameters (Ï\x86, θ) have been drawn, the HMM generates a token-level corpus w in the standard way: P (w, t|Ï\x86, θ) = P (T , W , θ, Ï\x88, Ï\x86, t, w|α, β) = P (T , W , Ï\x88|β) [Lexicon]  n n ï£\xad (w,t)â\x88\x88(w,t) j  P (tj |Ï\x86tjâ\x88\x921 )P (wj |tj , θtj ) P (Ï\x86, θ|T , α, β) [Parameter] P (w, t|Ï\x86, θ) [Token] We refer to the components on the right hand side as the lexicon, parameter, and token component respectively.', 'Since the parameter and token components will remain fixed throughout experiments, we briefly describe each.', 'Parameter Component As in the standard Bayesian HMM (Goldwater and Griffiths, 2007), all distributions are independently drawn from symmetric Dirichlet distributions: 2 Note that t and w denote tag and word sequences respectively, rather than individual tokens or tags.', 'Note that in our model, conditioned on T , there is precisely one t which has nonzero probability for the token component, since for each word, exactly one θt has support.', '3.1 Lexicon Component.', 'We present several variations for the lexical component P (T , W |Ï\x88), each adding more complex pa rameterizations.', 'Uniform Tag Prior (1TW) Our initial lexicon component will be uniform over possible tag assignments as well as word types.', 'Its only purpose is 3 This follows since each θt has St â\x88\x92 1 parameters and.', 'P St = n. β T VARIABLES Ï\x88 Y W : Word types (W1 ,.', '.., Wn ) (obs) P T : Tag assigns (T1 ,.', '.., Tn ) T W Ï\x86 E w : Token word seqs (obs) t : Token tag assigns (det by T ) PARAMETERS Ï\x88 : Lexicon parameters θ : Token word emission parameters Ï\x86 : Token tag transition parameters Ï\x86 Ï\x86 t1 t2 θ θ w1 w2 K Ï\x86 T tm O K θ E wN m N N Figure 1: Graphical depiction of our model and summary of latent variables and parameters.', 'The type-level tag assignments T generate features associated with word types W . 
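The joint factorization quoted above (lexicon, parameter, and token components) is scrambled by the extraction. One plausible LaTeX reading, following the decomposition named in the text (my reconstruction), is:

$$P(T, W, \theta, \psi, \phi, t, w \mid \alpha, \beta) \;=\; \underbrace{P(T, W, \psi \mid \beta)}_{\text{Lexicon}} \;\cdot\; \underbrace{P(\phi, \theta \mid T, \alpha, \beta)}_{\text{Parameter}} \;\cdot\; \underbrace{P(w, t \mid \phi, \theta)}_{\text{Token}}$$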
The tag assignments constrain the HMM emission parameters θ.', 'The tokens w are generated by token-level tags t from an HMM parameterized by the lexicon structure.', 'The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively.', 'They are set to fixed constants.', 'to explore how well we can induce POS tags using only the one-tag-per-word constraint.', 'Specifically, the lexicon is generated as: P (T , W |Ï\x88) =P (T )P (W |T ) Word Type Features (FEATS): Past unsupervised POS work have derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al.,2010).', 'Past work however, has typically associ n = n P (Ti)P (Wi|Ti) = i=1 1 n K n ated these features with token occurrences, typically in an HMM.', 'In our model, we associate these features at the type-level in the lexicon.', 'Here, we conThis model is equivalent to the standard HMM ex cept that it enforces the one-word-per-tag constraint.', 'Learned Tag Prior (PRIOR) We next assume there exists a single prior distribution Ï\x88 over tag assignments drawn from DIRICHLET(β, K ).', 'This alters generation of T as follows: n P (T |Ï\x88) = n P (Ti|Ï\x88) i=1 Note that this distribution captures the frequency of a tag across word types, as opposed to tokens.', 'The P (T |Ï\x88) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners are a very small portion of the vocabulary.', 'In contrast, NNP (proper nouns) form a large portion of vocabulary.', 'Note that these observa sider suffix features, capitalization features, punctuation, and digit features.', 'While possible to utilize the feature-based log-linear approach described in Berg-Kirkpatrick et al.', '(2010), we adopt a simpler na¨ıve Bayes strategy, where all features are emitted independently.', 'Specifically, we assume each word type W consists of feature-value pairs (f, v).', 'For each feature type f and tag t, a multinomial Ï\x88tf is drawn from a symmetric Dirichlet distribution with concentration parameter β.', 'The P (W |T , Ï\x88) term in the lexicon component now decomposes as: n P (W |T , Ï\x88) = n P (Wi|Ti, Ï\x88) i=1 n   tions are not modeled by the standard HMM, which = n ï£\xad n P (v|Ï\x88Ti f ) instead can model token-level frequency.', 'i=1 (f,v)â\x88\x88Wi', 'For inference, we are interested in the posterior probability over the latent variables in our model.', 'During training, we treat as observed the language word types W as well as the token-level corpus w. We utilize Gibbs sampling to approximate our collapsed model posterior: P (T ,t|W , w, α, β) â\x88\x9d P (T , t, W , w|α, β) 0.7 0.6 0.5 0.4 0.3 English Danish Dutch Germany Portuguese Spanish Swedish = P (T , t, W , w, Ï\x88, θ, Ï\x86, w|α, β)dÏ\x88dθdÏ\x86 Note that given tag assignments T , there is only one setting of token-level tags t which has mass in the above posterior.', 'Specifically, for the ith word type, the set of token-level tags associated with token occurrences of this word, denoted t(i), must all take the value Ti to have nonzero mass. 
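The type-level feature likelihood described above factors over independently emitted features, $P(W_i \mid T_i, \psi) = \prod_{(f,v) \in W_i} P(v \mid \psi_{T_i f})$. Here is a minimal Python sketch of that naive Bayes scoring step; the feature set, the smoothing floor, and the toy parameter table are illustrative assumptions, not the authors' implementation.

```python
from math import log

def extract_features(word):
    """Toy word-type features in the spirit of those described above
    (suffix, capitalization, digit); the exact feature set is illustrative."""
    return {"suffix": word[-2:],
            "capitalized": word[0].isupper(),
            "has_digit": any(c.isdigit() for c in word)}

def log_likelihood(word, tag, feature_probs):
    """Naive Bayes type-level likelihood: features are emitted independently
    given the tag, i.e. sum_f log P(v | psi_{tag, f}); 1e-6 is a crude floor
    for feature values not seen in the table."""
    return sum(log(feature_probs[tag][f].get(v, 1e-6))
               for f, v in extract_features(word).items())

# feature_probs[tag][feature][value] plays the role of the multinomial psi_{t,f};
# a tiny hand-filled table for a hypothetical noun-like tag.
feature_probs = {"NN": {"suffix": {"er": 0.10, "on": 0.05},
                        "capitalized": {True: 0.30, False: 0.70},
                        "has_digit": {True: 0.01, False: 0.99}}}
print(log_likelihood("station", "NN", feature_probs))
```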
Thus in the context of Gibbs sampling, if we want to block sample Ti with t(i), we only need sample values for Ti and consider this setting of t(i).', 'The equation for sampling a single type-level assignment Ti is given by, 0.2 0 5 10 15 20 25 30 Iteration Figure 2: Graph of the one-to-one accuracy of our full model (+FEATS) under the best hyperparameter setting by iteration (see Section 5).', 'Performance typically stabilizes across languages after only a few number of iterations.', 'to represent the ith word type emitted by the HMM: P (t(i)|Ti, t(â\x88\x92i), w, α) â\x88\x9d n P (w|Ti, t(â\x88\x92i), w(â\x88\x92i), α) (tb ,ta ) P (Ti, t(i)|T , W , t(â\x88\x92i), w, α, β) = P (T |tb, t(â\x88\x92i), α)P (ta|T , t(â\x88\x92i), α) â\x88\x92i (i) i i (â\x88\x92i) P (Ti|W , T â\x88\x92i, β)P (t |Ti, t , w, α) All terms are Dirichlet distributions and parameters can be analytically computed from counts in t(â\x88\x92i)where T â\x88\x92i denotes all type-level tag assignment ex cept Ti and t(â\x88\x92i) denotes all token-level tags except and w (â\x88\x92i) (Johnson, 2007).', 't(i).', 'The terms on the right-hand-side denote the type-level and token-level probability terms respectively.', 'The type-level posterior term can be computed according to, P (Ti|W , T â\x88\x92i, β) â\x88\x9d Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM.', 'A crucial difference is that the number of parameters is greatly reduced as is the number of variables that are sampled during each iteration.', 'In contrast to results reported in Johnson (2007), we found that the per P (Ti|T â\x88\x92i, β) n (f,v)â\x88\x88Wi P (v|Ti, f, W â\x88\x92i, T â\x88\x92i, β) formance of our Gibbs sampler on the basic 1TW model stabilized very quickly after about 10 full it All of the probabilities on the right-hand-side are Dirichlet, distributions which can be computed analytically given counts.', 'The token-level term is similar to the standard HMM sampling equations found in Johnson (2007).', 'The relevant variables are the set of token-level tags that appear before and after each instance of the ith word type; we denote these context pairs with the set {(tb, ta)} and they are contained in t(â\x88\x92i).', 'We use w erations of sampling (see Figure 2 for a depiction).', 'We evaluate our approach on seven languages: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.', 'On each language we investigate the contribution of each component of our model.', 'For all languages we do not make use of a tagging dictionary.', 'Mo del Hy per par am . 
E n g li s h1 1 m-1 D a n i s h1 1 m-1 D u t c h1 1 m-1 G er m a n1 1 m-1 Por tug ues e1 1 m-1 S p a ni s h1 1 m-1 S w e di s h1 1 m-1 1T W be st me dia n 45.', '2 62.6 45.', '1 61.7 37.', '2 56.2 32.', '1 53.8 47.', '4 53.7 43.', '9 61.0 44.', '2 62.2 39.', '3 68.4 49.', '0 68.4 48.', '5 68.1 34.', '3 54.4 33.', '36.', '0 55.3 34.', '9 50.2 +P RI OR be st me dia n 47.', '9 65.5 46.', '5 64.7 42.', '3 58.3 40.', '0 57.3 51.', '4 65.9 48.', '3 60.7 50.', '41.', '7 68.3 56.', '2 70.7 52.', '0 70.9 42.', '37.', '1 55.8 38.', '36.', '8 57.3 +F EA TS be st me dia n 50.', '9 66.4 47.', '8 66.4 52.', '1 61.2 43.', '2 60.7 56.', '4 69.0 51.', '5 67.3 55.', '4 70.4 46.', '2 61.7 64.', '1 74.5 56.', '5 70.1 58.', '3 68.9 50.', '0 57.2 43.', '3 61.7 38.', '5 60.6 Table 3: Multilingual Results: We report token-level one-to-one and many-to-one accuracy on a variety of languages under several experimental settings (Section 5).', 'For each language and setting, we report one-to-one (11) and many- to-one (m-1) accuracies.', 'For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 11 metric.', 'The second row represents the performance of the median hyperparameter setting.', 'Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3).', 'La ng ua ge # To ke ns # W or d Ty pe s # Ta gs E ng lis h D a ni s h D u tc h G e r m a n P or tu g u e s e S p a ni s h S w e di s h 1 1 7 3 7 6 6 9 4 3 8 6 2 0 3 5 6 8 6 9 9 6 0 5 2 0 6 6 7 8 8 9 3 3 4 1 9 1 4 6 7 4 9 2 0 6 1 8 3 5 6 2 8 3 9 3 7 2 3 2 5 2 8 9 3 1 1 6 4 5 8 2 0 0 5 7 4 5 2 5 1 2 5 4 2 2 4 7 4 1 Table 2: Statistics for various corpora utilized in experiments.', 'See Section 5.', 'The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task.', '5.1 Data Sets.', 'Following the setup of Johnson (2007), we use the whole of the Penn Treebank corpus for training and evaluation on English.', 'For other languages, we use the CoNLL-X multilingual dependency parsing shared task corpora (Buchholz and Marsi, 2006) which include gold POS tags (used for evaluation).', 'We train and test on the CoNLL-X training set.', 'Statistics for all data sets are shown in Table 2.', '5.2 Setup.', 'Models To assess the marginal utility of each component of the model (see Section 3), we incremen- tally increase its sophistication.', 'Specifically, we (+FEATS) utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P (W |T , Ï\x88) component.', 'Hyperparameters Our model has two Dirichlet concentration hyperparameters: α is the shared hyperparameter for the token-level HMM emission and transition distributions.', 'β is the shared hyperparameter for the tag assignment prior and word feature multinomials.', 'We experiment with four values for each hyperparameter resulting in 16 (α, β) combinations: α β 0.001, 0.01, 0.1, 1.0 0.01, 0.1, 1.0, 10 Iterations In each run, we performed 30 iterations of Gibbs sampling for the type assignment variables W .4 We use the final sample for evaluation.', 'Evaluation Metrics We report three metrics to evaluate tagging performance.', 'As is standard, we report the greedy one-to-one (Haghighi and Klein, 2006) and the many-to-one token-level accuracy obtained from mapping model states to gold POS tags.', 'We also report word type level accuracy, the fraction of word 
types assigned their majority tag (where the mapping between model state and tag is determined by greedy one-to-one mapping discussed above).5 For each language, we aggregate results in the following way: First, for each hyperparameter setting, evaluate three variants: The first model (1TW) only 4 Typically, the performance stabilizes after only 10 itera-.', 'encodes the one tag per word constraint and is uni form over type-level tag assignments.', 'The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P (T |Ï\x88).', 'The final model tions.', '5 We choose these two metrics over the Variation Information measure due to the deficiencies discussed in Gao and Johnson (2008).', 'we perform five runs with different random initialization of sampling state.', 'Hyperparameter settings are sorted according to the median one-to-one metric over runs.', 'We report results for the best and median hyperparameter settings obtained in this way.', 'Specifically, for both settings we report results on the median run for each setting.', 'Tag set As is standard, for all experiments, we set the number of latent model tag states to the size of the annotated tag set.', 'The original tag set for the CoNLL-X Dutch data set consists of compounded tags that are used to tag multi-word units (MWUs) resulting in a tag set of over 300 tags.', 'We tokenize MWUs and their POS tags; this reduces the tag set size to 12.', 'See Table 2 for the tag set size of other languages.', 'With the exception of the Dutch data set, no other processing is performed on the annotated tags.', '6 Results and Analysis.', 'We report token- and type-level accuracy in Table 3 and 6 for all languages and system settings.', 'Our analysis and comparison focuses primarily on the one-to-one accuracy since it is a stricter metric than many-to-one accuracy, but also report many-to-one for completeness.', 'Comparison with state-of-the-art taggers For comparison we consider two unsupervised tag- gers: the HMM with log-linear features of Berg- Kirkpatrick et al.', '(2010) and the posterior regular- ization HMM of Grac¸a et al.', '(2009).', 'The system of Berg-Kirkpatrick et al.', '(2010) reports the best unsupervised results for English.', 'We consider two variants of Berg-Kirkpatrick et al.', '(2010)â\x80\x99s richest model: optimized via either EM or LBFGS, as their relative performance depends on the language.', 'Our model outperforms theirs on four out of five languages on the best hyperparameter setting and three out of five on the median setting, yielding an average absolute difference across languages of 12.9% and 3.9% for best and median settings respectively compared to their best EM or LBFGS performance.', 'While Berg-Kirkpatrick et al.', '(2010) consistently outperforms ours on English, we obtain substantial gains across other languages.', 'For instance, on Spanish, the absolute gap on median performance is 10%.', 'Top 5 Bot to m 5 Go ld NN P NN JJ CD NN S RB S PD T # â\x80\x9d , 1T W CD W RB NN S VB N NN PR P$ W DT : MD . +P RI OR CD JJ NN S WP $ NN RR B- , $ â\x80\x9d . +F EA TS JJ NN S CD NN P UH , PR P$ # . 
'Our second point of comparison is with Graça et al. (2009), who also incorporate a sparsity constraint, but do so via altering the model objective using posterior regularization.', 'We can only compare with Graça et al. (2009) on Portuguese (Graça et al. (2009) also report results on English, but on the reduced 17 tag set, which is not comparable to ours).', 'Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result.', 'However, our full model takes advantage of word features not present in Graça et al. (2009).', 'Even without features, but still using the tag prior, our median result is 52.0%, still significantly outperforming Graça et al. (2009).', 'Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3.', 'A novel element of our model is the ability to capture type-level tag frequencies.', 'For this experiment, we compare our model with the uniform tag assignment prior (1TW) against the learned prior (+PRIOR).', 'Across all languages, +PRIOR consistently outperforms 1TW, reducing error on average by 9.1% and 5.9% on best and median settings respectively.', 'Similar behavior is observed when adding features.', 'The difference between the featureless model (+PRIOR) and our full model (+FEATS) is 13.6% and 7.7% average error reduction on best and median settings respectively.', 'Overall, the difference between our most basic model (1TW) and our full model (+FEATS) is 21.2% and 13.1% for the best and median settings respectively.', 'One striking example is the error reduction for Spanish, which reduces error by 36.5% and 24.7% for the best and median settings respectively.', 'We observe similar trends when using another measure, type-level accuracy (defined as the fraction of words correctly assigned their majority tag), according to which our full model yields 39.3% average error reduction across languages when compared to the basic configuration (1TW).', 'Language (1-1 / m-1), columns BK10 EM, BK10 LBFGS, G10, FEATS Best, FEATS Median: English 48.3/68.1, 56.0/75.5, –, 50.9/66.4, 47.8/66.4; Danish 42.3/66.7, 42.6/58.0, –, 52.1/61.2, 43.2/60.7; Dutch 53.7/67.0, 55.1/64.7, –, 56.4/69.0, 51.5/67.3; Portuguese 50.8/75.3, 43.2/74.8, 44.5/69.2, 64.1/74.5, 56.5/70.1; Spanish –, 40.6/73.2, –, 58.3/68.9, 50.0/57.2. Table 4: Comparison of our method (FEATS) to state-of-the-art methods.', 'Feature-based HMM Model (Berg-Kirkpatrick et al., 2010): The BK10 model uses a variety of orthographic features and employs the EM or LBFGS optimization algorithm; Posterior regularization model (Graça et al., 2009): The G10 model uses the posterior regularization approach to ensure the tag sparsity constraint.',
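The error reductions quoted in the ablation analysis are relative to the remaining error; a minimal worked example, assuming accuracies are percentages (the 34.3 figure is taken from the extracted Table 3 fragment for Spanish under 1TW, best setting):

    def error_reduction(acc_base, acc_new):
        """Relative error reduction in percent."""
        return 100.0 * (acc_new - acc_base) / (100.0 - acc_base)

    # Spanish, best setting, 1TW (34.3) vs. +FEATS (58.3):
    # error_reduction(34.3, 58.3) = 100 * 24.0 / 65.7 ≈ 36.5, matching the 36.5% quoted above.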
'Type-level accuracy (1TW / +PRIOR / +FEATS): English 21.1 / 28.8 / 42.8; Danish 10.1 / 20.7 / 45.9; Dutch 23.8 / 32.3 / 44.3; German 12.8 / 35.2 / 60.6; Portuguese 18.4 / 29.6 / 61.5; Spanish 7.3 / 27.6 / 49.9; Swedish 8.9 / 14.2 / 33.9. Table 6: Type-level Results: Each cell reports the type-level accuracy computed against the most frequent tag of each word type.', 'The state-to-tag mapping is obtained from the best hyperparameter setting for the 1-1 mapping shown in Table 3.', 'Table 5 provides insight into the behavior of different models in terms of the tagging lexicon they generate.', 'The table shows that the lexicon tag frequencies predicted by our full model are the closest to the gold standard.', '7 Conclusion and Future Work.', 'We have presented a method for unsupervised part-of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model.', 'This departure from the traditional token-based tagging approach allows us to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.', 'The resulting model is compact, efficiently learnable and linguistically expressive.', 'Our empirical results demonstrate that the type-based tagger rivals state-of-the-art token-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.', 'In this paper, we make a simplifying assumption of one-tag-per-word.', 'This assumption, however, is not inherent to type-based tagging models.', 'A promising direction for future work is to explicitly model a distribution over tags for each word type.', 'We hypothesize that modeling morphological information will greatly constrain the set of possible tags, thereby further refining the representation of the tag lexicon.', 'The authors acknowledge the support of the NSF (CAREER grant IIS0448168, and grant IIS 0904684).', 'We are especially grateful to Taylor Berg-Kirkpatrick for running additional experiments.', 'We thank members of the MIT NLP group for their suggestions and comments.', 'Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.']",abstractive -P11-1061_swastika,P11-1061,1,1,"Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.","We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language.","['Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections', 'We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language.', 'Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages.', 'We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-Kirkpatrick et al., 2010).', 'Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.', 'Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly
accurate systems.', 'Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English).', 'However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.', 'Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models.', 'Unfortunately, the best completely unsupervised English POS tagger (that does not make use of a tagging dictionary) reaches only 76.1% accuracy (Christodoulopoulos et al., 2010), making its practical usability questionable at best.', 'To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages.1 We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language.', 'This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009).', 'Naseem et al. (2009) and Snyder et al.', '(2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available.', 'Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways.', 'First, we use a novel graph-based framework for projecting syntactic information across language boundaries.', 'To this end, we construct a bilingual graph over word types to establish a connection between the two languages (§3), and then use graph label propagation to project syntactic information from English to the foreign language (§4).', 'Second, we treat the projected labels as features in an unsupervised model (§5), rather than using them directly for supervised training.', 'To make the projection practical, we rely on the twelve universal part-of-speech tags of Petrov et al. (2011).', 'Syntactic universals are a well studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al. 
(2010) for multilingual grammar induction.', 'Because there might be some controversy about the exact definitions of such universals, this set of coarse-grained POS categories is defined operationally, by collapsing language (or treebank) specific distinctions to a set of categories that exists across all languages.', 'These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics,2 by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.', 'We evaluate our approach on eight European languages (§6), and show that both our contributions provide consistent and statistically significant improvements.', 'Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%).', 'The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages.', 'Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus.', 'As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.', 'Graph construction does not require any labeled data, but makes use of two similarity functions.', 'The edge weights between the foreign language trigrams are computed using a co-occurence based similarity function, designed to indicate how syntactically similar the middle words of the connected trigrams are (§3.2).', 'To establish a soft correspondence between the two languages, we use a second similarity function, which leverages standard unsupervised word alignment statistics (§3.3).3 Since we have no labeled foreign data, our goal is to project syntactic information from the English side to the foreign side.', 'To initialize the graph we tag the English side of the parallel text using a supervised model.', 'By aggregating the POS labels of the English tokens to types, we can generate label distributions for the English vertices.', 'Label propagation can then be used to transfer the labels to the peripheral foreign vertices (i.e. the ones adjacent to the English vertices) first, and then among all of the foreign vertices (§4).', 'The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5).', 'The following three sections elaborate these different stages is more detail.', 'In graph-based learning approaches one constructs a graph whose vertices are labeled and unlabeled examples, and whose weighted edges encode the degree to which the examples they link have the same label (Zhu et al., 2003).', 'Graph construction for structured prediction problems such as POS tagging is non-trivial: on the one hand, using individual words as the vertices throws away the context necessary for disambiguation; on the other hand, it is unclear how to define (sequence) similarity if the vertices correspond to entire sentences.', 'Altun et al. 
(2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning.', 'More recently, Subramanya et al. (2010) defined a graph over the cliques in an underlying structured prediction model.', 'They considered a semi-supervised POS tagging scenario and showed that one can use a graph over trigram types, and edge weights based on distributional similarity, to improve a supervised conditional random field tagger.', 'We extend Subramanya et al.’s intuitions to our bilingual setup.', 'Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language.', 'The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010).', 'On the English side, however, the vertices (denoted by Ve) correspond to word types.', 'Because all English vertices are going to be labeled, we do not need to disambiguate them by embedding them in trigrams.', 'Furthermore, we do not connect the English vertices to each other, but only to foreign language vertices.4 The graph vertices are extracted from the different sides of a parallel corpus (De, Df) and an additional unlabeled monolingual foreign corpus Ff, which will be used later for training.', 'We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.', 'Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010).', 'We briefly review it here for completeness.', 'We define a symmetric similarity function K(uZ7 uj) over two foreign language vertices uZ7 uj E Vf based on the co-occurrence statistics of the nine feature concepts given in Table 1.', 'Each feature concept is akin to a random variable and its occurrence in the text corresponds to a particular instantiation of that random variable.', 'For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two.5 The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common.', 'This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them.', 'Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not.', 'Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices.', 'We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments.', 'To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.', 'Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De 5Note that many combinations are impossible giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common. 
and their foreign language translations Df.6 Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments De�f.', 'Based on these high-confidence alignments we can extract tuples of the form [u H v], where u is a foreign trigram type, whose middle word aligns to an English word type v. Our bilingual similarity function then sets the edge weights in proportion to these tuple counts.', 'So far the graph has been completely unlabeled.', 'To initialize the graph for label propagation we use a supervised English tagger to label the English side of the bitext.7 We then simply count the individual labels of the English tokens and normalize the counts to produce tag distributions over English word types.', 'These tag distributions are used to initialize the label distributions over the English vertices in the graph.', 'Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve.', 'A very small excerpt from an Italian-English graph is shown in Figure 1.', 'As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words.', 'In this particular case, all English vertices are labeled as nouns by the supervised tagger.', 'In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices.', 'It is worth noting that the middle words of the Italian trigrams are nouns too, which exhibits the fact that the similarity metric connects types having the same syntactic category.', 'In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices. the Italian vertices are connected to an automatically labeled English vertex.', 'Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram.', 'Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.', 'We use label propagation in two stages to generate soft labels on all the vertices in the graph.', 'In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, Vf�) at the periphery of the graph.', 'Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices.', 'This stage of label propagation results in a tag distribution ri over labels y, which encodes the proportion of times the middle word of ui E Vf aligns to English words vy tagged with label y: The second stage consists of running traditional label propagation to propagate labels from these peripheral vertices Vf� to all foreign language vertices in the graph, optimizing the following objective: 5 POS Induction After running label propagation (LP), we compute tag probabilities for foreign word types x by marginalizing the POS tag distributions of foreign trigrams ui = x− x x+ over the left and right context words: where the qi (i = 1, ... 
, |Vf|) are the label distributions over the foreign language vertices and µ and ν are hyperparameters that we discuss in §6.4.', 'We use a squared loss to penalize neighboring vertices that have different label distributions: kqi − qjk2 = Ey(qi(y) − qj(y))2, and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.', 'It can be shown that this objective is convex in q.', 'The first term in the objective function is the graph smoothness regularizer which encourages the distributions of similar vertices (large wij) to be similar.', 'The second term is a regularizer and encourages all type marginals to be uniform to the extent that is allowed by the first two terms (cf. maximum entropy principle).', 'If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags.', 'While it is possible to derive a closed form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|.', 'Instead, we resort to an iterative update based method.', 'We formulate the update as follows: where ∀ui ∈ Vf \\ Vfl, γi(y) and κi are defined as: We ran this procedure for 10 iterations.', 'We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: We describe how we choose τ in §6.4.', 'This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger.', 'We develop our POS induction model based on the feature-based HMM of Berg-Kirkpatrick et al. (2010).', 'For a sentence x and a state sequence z, a first order Markov model defines a distribution: (9) where Val(X) corresponds to the entire vocabulary.', 'This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation.', 'In our experiments, we used the same set of features as BergKirkpatrick et al. (2010): an indicator feature based In a traditional Markov model, the emission distribution PΘ(Xi = xi |Zi = zi) is a set of multinomials.', 'The feature-based model replaces the emission distribution with a log-linear model, such that: on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3.', 'All features were conjoined with the state z.', 'We trained this model by optimizing the following objective function: Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective.', 'To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989).', 'For English POS tagging, BergKirkpatrick et al. 
(2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977).8 Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%.', 'We adopted this state-of-the-art model because it makes it easy to experiment with various ways of incorporating our novel constraint feature into the log-linear emission model.', 'This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx.', 'The function A : F —* C maps from the language specific fine-grained tagset F to the coarser universal tagset C and is described in detail in §6.2: Note that when tx(y) = 1 the feature value is 0 and has no effect on the model, while its value is −oc when tx(y) = 0 and constrains the HMM’s state space.', 'This formulation of the constraint feature is equivalent to the use of a tagging dictionary extracted from the graph using a threshold T on the posterior distribution of tags for a given word type (Eq.', '7).', 'It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.', 'However, we do not explore this possibility in the current work.', 'Before presenting our results, we describe the datasets that we used, as well as two baselines.', 'We utilized two kinds of datasets in our experiments: (i) monolingual treebanks9 and (ii) large amounts of parallel text with English on one side.', 'The availability of these resources guided our selection of foreign languages.', 'For monolingual treebank data we relied on the CoNLL-X and CoNLL-2007 shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007).', 'The parallel data came from the Europarl corpus (Koehn, 2005) and the ODS United Nations dataset (UN, 2006).', 'Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish.', 'Of course, we are primarily interested in applying our techniques to languages for which no labeled resources are available.', 'However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach.', 'We paid particular attention to minimize the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning.', 'We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available.', 'We use the universal POS tagset of Petrov et al. (2011) in our experiments.10 This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words).', 'While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent part-of-speech and exist in one form or another in all of the languages that we studied.', 'For each language under consideration, Petrov et al. 
(2011) provide a mapping A from the fine-grained language specific POS tags in the foreign treebank to the universal POS tags.', 'The supervised POS tagging accuracies (on this tagset) are shown in the last row of Table 2.', 'The taggers were trained on datasets labeled with the universal tags.', 'The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank.', 'In other words, the set of hidden states F was chosen to be the fine set of treebank tags.', 'Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset.', 'To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach.', 'We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data.', 'For unaligned words, we set the tag to the most frequent tag in the corresponding treebank.', 'For each language, we took the same number of sentences from the bitext as there are in its treebank, and trained a supervised feature-HMM.', 'This can be seen as a rough approximation of Yarowsky and Ngai (2001).', 'We tried two versions of our graph-based approach: feature after the first stage of label propagation (Eq.', '1).', 'Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage.', 'Furthermore we expect the label distributions on the foreign to be fairly noisy, because the graph constraints have not been taken into account yet.', 'Our oracles took advantage of the labeled treebanks: While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be set.', 'Fortunately, performance was stable across various values, and we were able to use the same hyperparameters for all languages.', 'We used C = 1.0 as the L2 regularization constant in (Eq.', '10) and trained both EM and L-BFGS for 1000 iterations.', 'When extracting the vector t, used to compute the constraint feature from the graph, we tried three threshold values for r (see Eq.', '7).', 'Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3.', 'For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, r = 0.2 can be used.', 'For graph propagation, the hyperparameter v was set to 2 x 10−6 and was not tuned.', 'The graph was constructed using 2 million trigrams; we chose these by truncating the parallel datasets up to the number of sentence pairs that contained 2 million trigrams.', 'Table 2 shows our complete set of results.', 'As expected, the vanilla HMM trained with EM performs the worst.', 'The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. 
(2010).', 'Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on an average.', 'The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages.', 'Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model.', 'For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.', 'Our full model (“With LP”) outperforms the unsupervised baselines and the “No LP” setting for all languages.', 'It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy.', 'As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting.11 Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages.', 'Our full model outperforms the “No LP” setting because it has better vocabulary coverage and allows the extraction of a larger set of constraint features.', 'We tabulate this increase in Table 3.', 'For all languages, the vocabulary sizes increase by several thousand words.', 'Although the tag distributions of the foreign words (Eq.', '6) are noisy, the results confirm that label propagation within the foreign language part of the graph adds significant quality for every language.', 'Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags.', 'While the first three models get three to four tags wrong, our best model gets only one word wrong and is the most accurate among the four models for this example.', 'Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive.', 'As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.', 'As a result, its POS tag needs to be induced in the “No LP” case, while the 11A word level paired-t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and p < 0.05 for Dutch. 
correct tag is available as a constraint feature in the “With LP” case.', 'We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages.', 'Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimize the number of free parameters and used the same hyperparameters for all language pairs.', 'Our results suggest that it is possible to learn accurate POS taggers for languages which do not have any annotated data, but have translations into a resource-rich language.', 'Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models.', 'We would like to thank Ryan McDonald for numerous discussions on this topic.', 'We would also like to thank Amarnag Subramanya for helping us with the implementation of label propagation and Shankar Kumar for access to the parallel data.', 'Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper.']",abstractive -N04-1038,N04-1038,5,87,"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.","2.2.2 The Caseframe Network The first type of contextual role knowledge that BABAR learns is the Caseframe Network (CFNet), which identifies caseframes that co-occur in anaphor/antecedent resolutions.","['Unsupervised Learning of Contextual Role Knowledge for Coreference Resolution', 'We present a coreference resolver called BABAR that uses contextual role knowledge to evaluate possible antecedents for an anaphor.', 'BABAR uses information extraction patterns to identify contextual roles and creates four contextual role knowledge sources using unsupervised learning.', 'These knowledge sources determine whether the contexts surrounding an anaphor and antecedent are compatible.', 'BABAR applies a DempsterShafer probabilistic model to make resolutions based on evidence from the contextual role knowledge sources as well as general knowledge sources.', 'Experiments in two domains showed that the contextual role knowledge improved coreference performance, especially on pronouns.', 'The problem of coreference resolution has received considerable attention, including theoretical discourse models (e.g., (Grosz et al., 1995; Grosz and Sidner, 1998)), syntactic algorithms (e.g., (Hobbs, 1978; Lappin and Le- ass, 1994)), and supervised machine learning systems (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001).', 'Most computational models for coreference resolution rely on properties of the anaphor and candidate antecedent, such as lexical matching, grammatical and syntactic features, semantic agreement, and positional information.', 'The focus of our work is on the use of contextual role knowledge for coreference resolution.', 'A contextual role represents the role that a noun phrase plays in an event or relationship.', 'Our work is motivated by the observation that contextual roles can be critically important in determining the referent of a noun phrase.', 'Consider the following sentences: (a) Jose Maria Martinez, Roberto Lisandy, and Dino Rossy, who were staying at a Tecun Uman hotel, were kidnapped by 
armed men who took them to an unknown place.', '(b) After they were released...', '(c) After they blindfolded the men...', 'In (b) â\x80\x9ctheyâ\x80\x9d refers to the kidnapping victims, but in (c) â\x80\x9ctheyâ\x80\x9d refers to the armed men.', 'The role that each noun phrase plays in the kidnapping event is key to distinguishing these cases.', 'The correct resolution in sentence (b) comes from knowledge that people who are kidnapped are often subsequently released.', 'The correct resolution in sentence (c) depends on knowledge that kidnappers frequently blindfold their victims.', 'We have developed a coreference resolver called BABAR that uses contextual role knowledge to make coreference decisions.', 'BABAR employs information extraction techniques to represent and learn role relationships.', 'Each pattern represents the role that a noun phrase plays in the surrounding context.', 'BABAR uses unsupervised learning to acquire this knowledge from plain text without the need for annotated training data.', 'Training examples are generated automatically by identifying noun phrases that can be easily resolved with their antecedents using lexical and syntactic heuristics.', 'BABAR then computes statistics over the training examples measuring the frequency with which extraction patterns and noun phrases co-occur in coreference resolutions.', 'In this paper, Section 2 begins by explaining how contextual role knowledge is represented and learned.', 'Section 3 describes the complete coreference resolution model, which uses the contextual role knowledge as well as more traditional coreference features.', 'Our coreference resolver also incorporates an existential noun phrase recognizer and a DempsterShafer probabilistic model to make resolution decisions.', 'Section 4 presents experimen tal results on two corpora: the MUC4 terrorism corpus, and Reuters texts about natural disasters.', 'Our results show that BABAR achieves good performance in both domains, and that the contextual role knowledge improves performance, especially on pronouns.', 'Finally, Section 5 explains how BABAR relates to previous work, and Section 6 summarizes our conclusions.', 'In this section, we describe how contextual role knowledge is represented and learned.', 'Section 2.1 describes how BABAR generates training examples to use in the learning process.', 'We refer to this process as Reliable Case Resolution because it involves finding cases of anaphora that can be easily resolved with their antecedents.', 'Section 2.2 then describes our representation for contextual roles and four types of contextual role knowledge that are learned from the training examples.', '2.1 Reliable Case Resolutions.', 'The first step in the learning process is to generate training examples consisting of anaphor/antecedent resolutions.', 'BABAR uses two methods to identify anaphors that can be easily and reliably resolved with their antecedent: lexical seeding and syntactic seeding.', '2.1.1 Lexical Seeding It is generally not safe to assume that multiple occurrences of a noun phrase refer to the same entity.', 'For example, the company may refer to Company X in one paragraph and Company Y in another.', 'However, lexically similar NPs usually refer to the same entity in two cases: proper names and existential noun phrases.', 'BABAR uses a named entity recognizer to identify proper names that refer to people and companies.', 'Proper names are assumed to be coreferent if they match exactly, or if they closely match based on a few heuristics.', 'For 
example, a personâ\x80\x99s full name will match with just their last name (e.g., â\x80\x9cGeorge Bushâ\x80\x9d and â\x80\x9cBushâ\x80\x9d), and a company name will match with and without a corporate suffix (e.g., â\x80\x9cIBM Corp.â\x80\x9d and â\x80\x9cIBMâ\x80\x9d).', 'Proper names that match are resolved with each other.', 'The second case involves existential noun phrases (Allen, 1995), which are noun phrases that uniquely specify an object or concept and therefore do not need a prior referent in the discourse.', 'In previous work (Bean and Riloff, 1999), we developed an unsupervised learning algorithm that automatically recognizes definite NPs that are existential without syntactic modification because their meaning is universally understood.', 'For example, a story can mention â\x80\x9cthe FBIâ\x80\x9d, â\x80\x9cthe White Houseâ\x80\x9d, or â\x80\x9cthe weatherâ\x80\x9d without any prior referent in the story.', 'Although these existential NPs do not need a prior referent, they may occur multiple times in a document.', 'By definition, each existential NP uniquely specifies an object or concept, so we can infer that all instances of the same existential NP are coreferent (e.g., â\x80\x9cthe FBIâ\x80\x9d always refers to the same entity).', 'Using this heuristic, BABAR identifies existential definite NPs in the training corpus using our previous learning algorithm (Bean and Riloff, 1999) and resolves all occurrences of the same existential NP with each another.1 2.1.2 Syntactic Seeding BABAR also uses syntactic heuristics to identify anaphors and antecedents that can be easily resolved.', 'Table 1 briefly describes the seven syntactic heuristics used by BABAR to resolve noun phrases.', 'Words and punctuation that appear in brackets are considered optional.', 'The anaphor and antecedent appear in boldface.', '1.', 'Reflexive pronouns with only 1 NP in scope..', 'Ex: The regime gives itself the right...', '2.', 'Relative pronouns with only 1 NP in scope..', 'Ex: The brigade, which attacked ...', 'Ex: Mr. Cristiani is the president ...', 'Ex: The government said it ...', 'Ex: He was found in San Jose, where ...', 'Ex: Mr. Cristiani, president of the country ...', 'Ex: Mr. 
Bush disclosed the policy by reading it...', 'Table 1: Syntactic Seeding Heuristics BABARâ\x80\x99s reliable case resolution heuristics produced a substantial set of anaphor/antecedent resolutions that will be the training data used to learn contextual role knowledge.', 'For terrorism, BABAR generated 5,078 resolutions: 2,386 from lexical seeding and 2,692 from syntactic seeding.', 'For natural disasters, BABAR generated 20,479 resolutions: 11,652 from lexical seeding and 8,827 from syntactic seeding.', '2.2 Contextual Role Knowledge.', 'Our representation of contextual roles is based on information extraction patterns that are converted into simple caseframes.', 'First, we describe how the caseframes are represented and learned.', 'Next, we describe four contextual role knowledge sources that are created from the training examples and the caseframes.', '2.2.1 The Caseframe Representation Information extraction (IE) systems use extraction patterns to identify noun phrases that play a specific role in 1 Our implementation only resolves NPs that occur in the same document, but in retrospect, one could probably resolve instances of the same existential NP in different documents too.', 'an event.', 'For IE, the system must be able to distinguish between semantically similar noun phrases that play different roles in an event.', 'For example, management succession systems must distinguish between a person who is fired and a person who is hired.', 'Terrorism systems must distinguish between people who perpetrate a crime and people who are victims of a crime.', 'We applied the AutoSlog system (Riloff, 1996) to our unannotated training texts to generate a set of extraction patterns for each domain.', 'Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found.', 'For example, kidnapping victims should be extracted from the subject of the verb â\x80\x9ckidnappedâ\x80\x9d when it occurs in the passive voice (the shorthand representation of this pattern would be â\x80\x9c were kidnappedâ\x80\x9d).', 'The types of patterns produced by AutoSlog are outlined in (Riloff, 1996).', 'Ideally weâ\x80\x99d like to know the thematic role of each extracted noun phrase, but AutoSlog does not generate thematic roles.', 'As a (crude) approximation, we normalize the extraction patterns with respect to active and passive voice and label those extractions as agents or patients.', 'For example, the passive voice pattern â\x80\x9c were kidnappedâ\x80\x9d and the active voice pattern â\x80\x9ckidnapped â\x80\x9d are merged into a single normalized pattern â\x80\x9ckidnapped â\x80\x9d.2 For the sake of sim plicity, we will refer to these normalized extraction patterns as caseframes.3 These caseframes can capture two types of contextual role information: (1) thematic roles corresponding to events (e.g, â\x80\x9c kidnappedâ\x80\x9d or â\x80\x9ckidnapped â\x80\x9d), and (2) predicate-argument relations associated with both verbs and nouns (e.g., â\x80\x9ckidnapped for â\x80\x9d or â\x80\x9cvehicle with â\x80\x9d).', 'We generate these caseframes automatically by running AutoSlog over the training corpus exhaustively so that it literally generates a pattern to extract every noun phrase in the corpus.', 'The learned patterns are then normalized and applied to the corpus.', 'This process produces a large set of caseframes coupled with a list of the noun phrases that they extracted.', 'The contextual role knowledge that BABAR uses for coreference resolution is 
derived from this caseframe data.', '2.2.2 The Caseframe Network The first type of contextual role knowledge that BABAR learns is the Caseframe Network (CFNet), which identifies caseframes that co-occur in anaphor/antecedent resolutions.', 'Our assumption is that caseframes that co-occur in resolutions often have a 2 This normalization is performed syntactically without semantics, so the agent and patient roles are not guaranteed to hold, but they usually do in practice.', '3 These are not full case frames in the traditional sense, but they approximate a simple case frame with a single slot.', 'conceptual relationship in the discourse.', 'For example, co-occurring caseframes may reflect synonymy (e.g., â\x80\x9c kidnappedâ\x80\x9d and â\x80\x9c abductedâ\x80\x9d) or related events (e.g., â\x80\x9c kidnappedâ\x80\x9d and â\x80\x9c releasedâ\x80\x9d).', 'We do not attempt to identify the types of relationships that are found.', 'BABAR merely identifies caseframes that frequently co-occur in coreference resolutions.', 'Te rro ris m Na tur al Dis ast ers mu rde r of < NP > kill ed

damaged was injured in <NP> reported added occurred cause of <NP> stated added wreaked crossed perpetrated condemned
dri ver of < NP > car ryi ng Figure 1: Caseframe Network Examples Figure 1 shows examples of caseframes that co-occur in resolutions, both in the terrorism and natural disaster domains.', 'The terrorism examples reflect fairly obvious relationships: people who are murdered are killed; agents that â\x80\x9creportâ\x80\x9d things also â\x80\x9caddâ\x80\x9d and â\x80\x9cstateâ\x80\x9d things; crimes that are â\x80\x9cperpetratedâ\x80\x9d are often later â\x80\x9ccondemnedâ\x80\x9d.', 'In the natural disasters domain, agents are often forces of nature, such as hurricanes or wildfires.', 'Figure 1 reveals that an event that â\x80\x9cdamagedâ\x80\x9d objects may also cause injuries; a disaster that â\x80\x9coccurredâ\x80\x9d may be investigated to find its â\x80\x9ccauseâ\x80\x9d; a disaster may â\x80\x9cwreakâ\x80\x9d havoc as it â\x80\x9ccrossesâ\x80\x9d geographic regions; and vehicles that have a â\x80\x9cdriverâ\x80\x9d may also â\x80\x9ccarryâ\x80\x9d items.', 'During coreference resolution, the caseframe network provides evidence that an anaphor and prior noun phrase might be coreferent.', 'Given an anaphor, BABAR identifies the caseframe that would extract it from its sentence.', 'For each candidate antecedent, BABAR identifies the caseframe that would extract the candidate, pairs it with the anaphorâ\x80\x99s caseframe, and consults the CF Network to see if this pair of caseframes has co-occurred in previous resolutions.', 'If so, the CF Network reports that the anaphor and candidate may be coreferent.', '2.2.3 Lexical Caseframe Expectations The second type of contextual role knowledge learned by BABAR is Lexical Caseframe Expectations, which are used by the CFLex knowledge source.', 'For each case- frame, BABAR collects the head nouns of noun phrases that were extracted by the caseframe in the training corpus.', 'For each resolution in the training data, BABAR also associates the co-referring expression of an NP with the NPâ\x80\x99s caseframe.', 'For example, if X and Y are coreferent, then both X and Y are considered to co-occur with the caseframe that extracts X as well as the caseframe that extracts Y. We will refer to the set of nouns that co-occur with a caseframe as the lexical expectations of the case- frame.', 'Figure 2 shows examples of lexical expectations that were learned for both domains.', 'collected too.', 'We will refer to the semantic classes that co-occur with a caseframe as the semantic expectations of the caseframe.', 'Figure 3 shows examples of semantic expectations that were learned.', 'For example, BABAR learned that agents that â\x80\x9cassassinateâ\x80\x9d or â\x80\x9cinvestigate a causeâ\x80\x9d are usually humans or groups (i.e., organizations).', 'T e r r o r i s m Ca sef ra me Semantic Classes ass ass ina ted group, human inv esti gat ion int o < N P> event exp lod ed out sid e < N P> building N a t u r a l D i s a s t e r s Ca sef ra me Semantic Classes inv esti gat ing cau se group, human sur viv or of < N P> event, natphenom hit wit h < N P> attribute, natphenom Figure 3: Semantic Caseframe Expectations Figure 2: Lexical Caseframe Expectations To illustrate how lexical expectations are used, suppose we want to determine whether noun phrase X is the antecedent for noun phrase Y. 
If they are coreferent, then X and Y should be substitutable for one another in the story.4 Consider these sentences: (S1) Fred was killed by a masked man with a revolver.', '(S2) The burglar fired the gun three times and fled.', 'â\x80\x9cThe gunâ\x80\x9d will be extracted by the caseframe â\x80\x9cfired â\x80\x9d.', 'Its correct antecedent is â\x80\x9ca revolverâ\x80\x9d, which is extracted by the caseframe â\x80\x9ckilled with â\x80\x9d.', 'If â\x80\x9cgunâ\x80\x9d and â\x80\x9crevolverâ\x80\x9d refer to the same object, then it should also be acceptable to say that Fred was â\x80\x9ckilled with a gunâ\x80\x9d and that the burglar â\x80\x9cfireda revolverâ\x80\x9d.', 'During coreference resolution, BABAR checks (1) whether the anaphor is among the lexical expectations for the caseframe that extracts the candidate antecedent, and (2) whether the candidate is among the lexical expectations for the caseframe that extracts the anaphor.', 'If either case is true, then CFLex reports that the anaphor and candidate might be coreferent.', '2.2.4 Semantic Caseframe Expectations The third type of contextual role knowledge learned by BABAR is Semantic Caseframe Expectations.', 'Semantic expectations are analogous to lexical expectations except that they represent semantic classes rather than nouns.', 'For each caseframe, BABAR collects the semantic classes associated with the head nouns of NPs that were extracted by the caseframe.', 'As with lexical expections, the semantic classes of co-referring expressions are 4 They may not be perfectly substitutable, for example one NP may be more specific (e.g., â\x80\x9cheâ\x80\x9d vs. â\x80\x9cJohn F. Kennedyâ\x80\x9d).', 'But in most cases they can be used interchangably.', 'For each domain, we created a semantic dictionary by doing two things.', 'First, we parsed the training corpus, collected all the noun phrases, and looked up each head noun in WordNet (Miller, 1990).', 'We tagged each noun with the top-level semantic classes assigned to it in Word- Net.', 'Second, we identified the 100 most frequent nouns in the training corpus and manually labeled them with semantic tags.', 'This step ensures that the most frequent terms for each domain are labeled (in case some of them are not in WordNet) and labeled with the sense most appropriate for the domain.', 'Initially, we planned to compare the semantic classes of an anaphor and a candidate and infer that they might be coreferent if their semantic classes intersected.', 'However, using the top-level semantic classes of WordNet proved to be problematic because the class distinctions are too coarse.', 'For example, both a chair and a truck would be labeled as artifacts, but this does not at all suggest that they are coreferent.', 'So we decided to use semantic class information only to rule out candidates.', 'If two nouns have mutually exclusive semantic classes, then they cannot be coreferent.', 'This solution also obviates the need to perform word sense disambiguation.', 'Each word is simply tagged with the semantic classes corresponding to all of its senses.', 'If these sets do not overlap, then the words cannot be coreferent.', 'The semantic caseframe expectations are used in two ways.', 'One knowledge source, called WordSemCFSem, is analogous to CFLex: it checks whether the anaphor and candidate antecedent are substitutable for one another, but based on their semantic classes instead of the words themselves.', 'Given an anaphor and candidate, BABAR checks (1) whether the semantic classes of the anaphor 
intersect with the semantic expectations of the caseframe that extracts the candidate, and (2) whether the semantic classes of the candidate intersect with the semantic ex pectations of the caseframe that extracts the anaphor.', 'If one of these checks fails then this knowledge source reports that the candidate is not a viable antecedent for the anaphor.', 'A different knowledge source, called CFSemCFSem, compares the semantic expectations of the caseframe that extracts the anaphor with the semantic expectations of the caseframe that extracts the candidate.', 'If the semantic expectations do not intersect, then we know that the case- frames extract mutually exclusive types of noun phrases.', 'In this case, this knowledge source reports that the candidate is not a viable antecedent for the anaphor.', '2.3 Assigning Evidence Values.', 'Contextual role knowledge provides evidence as to whether a candidate is a plausible antecedent for an anaphor.', 'The two knowledge sources that use semantic expectations, WordSemCFSem and CFSemCFSem, always return values of -1 or 0.', '-1 means that an NP should be ruled out as a possible antecedent, and 0 means that the knowledge source remains neutral (i.e., it has no reason to believe that they cannot be coreferent).', 'The CFLex and CFNet knowledge sources provide positive evidence that a candidate NP and anaphor might be coreferent.', 'They return a value in the range [0,1], where 0 indicates neutrality and 1 indicates the strongest belief that the candidate and anaphor are coreferent.', 'BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship.', 'For each co-occurrence relation (noun/caseframe for CFLex, and caseframe/caseframe for CFNet), BABAR computes its log-likelihood value and looks it up in the Ï\x872 table to obtain a confidence level.', 'The confidence level is then used as the belief value for the knowledge source.', 'For example, if CFLex determines that the log- likelihood statistic for the co-occurrence of a particular noun and caseframe corresponds to the 90% confidence level, then CFLex returns .90 as its belief that the anaphor and candidate are coreferent.', '3 The Coreference Resolution Model.', 'Given a document to process, BABAR uses four modules to perform coreference resolution.', 'First, a non-anaphoric NP classifier identifies definite noun phrases that are existential, using both syntactic rules and our learned existential NP recognizer (Bean and Riloff, 1999), and removes them from the resolution process.', 'Second, BABAR performs reliable case resolution to identify anaphora that can be easily resolved using the lexical and syntactic heuristics described in Section 2.1.', 'Third, all remaining anaphora are evaluated by 11 different knowledge sources: the four contextual role knowledge sources just described and seven general knowledge sources.', 'Finally, a DempsterShafer probabilistic model evaluates the evidence provided by the knowledge sources for all candidate antecedents and makes the final resolution decision.', 'In this section, we describe the seven general knowledge sources and explain how the DempsterShafer model makes resolutions.', '3.1 General Knowledge Sources.', 'Figure 4 shows the seven general knowledge sources (KSs) that represent features commonly used for coreference resolution.', 'The gender, number, and scoping KSs eliminate candidates from consideration.', 'The scoping heuristics are based on the anaphor type: for reflexive pronouns the scope is 
the current clause, for relative pronouns it is the prior clause following its VP, for personal pronouns it is the anaphorâ\x80\x99s sentence and two preceding sentences, and for definite NPs it is the anaphorâ\x80\x99s sentence and eight preceding sentences.', 'The semantic agreement KS eliminates some candidates, but also provides positive evidence in one case: if the candidate and anaphor both have semantic tags human, company, date, or location that were assigned via NER or the manually labeled dictionary entries.', 'The rationale for treating these semantic labels differently is that they are specific and reliable (as opposed to the WordNet classes, which are more coarse and more noisy due to polysemy).', 'KS Function Ge nde r filters candidate if gender doesnâ\x80\x99t agree.', 'Nu mb er filters candidate if number doesnâ\x80\x99t agree.', 'Sc opi ng filters candidate if outside the anaphorâ\x80\x99s scope.', 'Se ma nti c (a) filters candidate if its semantic tags d o n â\x80\x99 t i n t e r s e c t w i t h t h o s e o f t h e a n a p h o r .', '( b ) s u p p o r t s c a n d i d a t e i f s e l e c t e d s e m a n t i c t a g s m a t c h t h o s e o f t h e a n a p h o r . Le xic al computes degree of lexical overlap b e t w e e n t h e c a n d i d a t e a n d t h e a n a p h o r . Re cen cy computes the relative distance between the c a n d i d a t e a n d t h e a n a p h o r . Sy nR ole computes relative frequency with which the c a n d i d a t e â\x80\x99 s s y n t a c t i c r o l e o c c u r s i n r e s o l u t i o n s . Figure 4: General Knowledge Sources The Lexical KS returns 1 if the candidate and anaphor are identical, 0.5 if their head nouns match, and 0 otherwise.', 'The Recency KS computes the distance between the candidate and the anaphor relative to its scope.', 'The SynRole KS computes the relative frequency with which the candidatesâ\x80\x99 syntactic role (subject, direct object, PP object) appeared in resolutions in the training set.', 'During development, we sensed that the Recency and Syn- role KSs did not deserve to be on equal footing with the other KSs because their knowledge was so general.', 'Consequently, we cut their evidence values in half to lessen their influence.', '3.2 The DempsterShafer Decision Model.', 'BABAR uses a DempsterShafer decision model (Stefik, 1995) to combine the evidence provided by the knowledge sources.', 'Our motivation for using DempsterShafer is that it provides a well-principled framework for combining evidence from multiple sources with respect to competing hypotheses.', 'In our situation, the competing hypotheses are the possible antecedents for an anaphor.', 'An important aspect of the DempsterShafer model is that it operates on sets of hypotheses.', 'If evidence indicates that hypotheses C and D are less likely than hypotheses A and B, then probabilities are redistributed to reflect the fact that {A, B} is more likely to contain the answer than {C, D}.', 'The ability to redistribute belief values across sets rather than individual hypotheses is key.', 'The evidence may not say anything about whether A is more likely than B, only that C and D are not likely.', 'Each set is assigned two values: belief and plausibility.', 'Initially, the DempsterShafer model assumes that all hypotheses are equally likely, so it creates a set called θ that includes all hypotheses.', 'θ has a belief value of 1.0, indicating complete certainty that the correct hypothesis is included in the set, and a plausibility value of 1.0, indicating that there 
is no evidence for competing hypotheses.5 As evidence is collected and the likely hypotheses are whittled down, belief is redistributed to subsets of θ.', '5 Initially there are no competing hypotheses because all hypotheses are included in θ by definition.', 'Formally, the Dempster-Shafer theory defines a probability density function m(S), where S is a set of hypotheses.', 'm(S) represents the belief that the correct hypothesis is included in S. The model assumes that evidence also arrives as a probability density function (pdf) over sets of hypotheses.6 Integrating new evidence into the existing model is therefore simply a matter of defining a function to merge pdfs, one representing the current belief system and one representing the beliefs of the new evidence.', '6 Our knowledge sources return some sort of probability estimate, although in some cases this estimate is not especially well-principled (e.g., the Recency KS).', 'The Dempster-Shafer rule for combining pdfs is: m_3(S) = \frac{\sum_{X \cap Y = S} m_1(X) * m_2(Y)}{1 - \sum_{X \cap Y = \emptyset} m_1(X) * m_2(Y)} (1).', 'All sets of hypotheses (and their corresponding belief values) in the current model are crossed with the sets of hypotheses (and belief values) provided by the new evidence.', 'Sometimes, however, these beliefs can be contradictory.', 'For example, suppose the current model assigns a belief value of .60 to {A, B}, meaning that it is 60% sure that the correct hypothesis is either A or B.', 'Then new evidence arrives with a belief value of .70 assigned to {C}, meaning that it is 70% sure the correct hypothesis is C.', 'The intersection of these sets is the null set because these beliefs are contradictory.', 'The belief value that would have been assigned to the intersection of these sets is .60*.70=.42, but this belief has nowhere to go because the null set is not permissible in the model.7 So this probability mass (.42) has to be redistributed.', '7 The Dempster-Shafer theory assumes that one of the hypotheses in θ is correct, so eliminating all of the hypotheses violates this assumption.', 'Dempster-Shafer handles this by re-normalizing all the belief values with respect to only the non-null sets (this is the purpose of the denominator in Equation 1).',
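A minimal sketch of how the combination rule in Equation (1) and the belief threshold described below might be implemented. The function names, the use of frozensets for hypothesis sets, and the assumption that any residual mass stays on θ are illustrative choices, not details taken from BABAR.

```python
from itertools import product

def combine(m1, m2):
    """Dempster-Shafer combination (Equation 1): cross two mass functions over
    frozensets of candidate antecedents, drop mass that falls on the empty set,
    and re-normalize what remains."""
    combined, conflict = {}, 0.0
    for (x, bx), (y, by) in product(m1.items(), m2.items()):
        s = x & y
        if s:
            combined[s] = combined.get(s, 0.0) + bx * by
        else:
            conflict += bx * by            # .60 * .70 = .42 in the worked example
    return {s: b / (1.0 - conflict) for s, b in combined.items()}

def resolve(belief, threshold=0.50):
    """Select an antecedent only if its singleton set reaches the belief
    threshold; otherwise leave the anaphor unresolved."""
    for s, b in belief.items():
        if len(s) == 1 and b >= threshold:
            return next(iter(s))
    return None

# Worked example from the text: theta = {A, B, C}; one source puts .60 on {A, B},
# another puts .70 on {C}; the residual mass is assumed to stay on theta.
theta = frozenset({"A", "B", "C"})
m1 = {frozenset({"A", "B"}): 0.60, theta: 0.40}
m2 = {frozenset({"C"}): 0.70, theta: 0.30}
print(combine(m1, m2))            # {A,B}: ~0.31, {C}: ~0.48, theta: ~0.21
print(resolve(combine(m1, m2)))   # None: no singleton set reaches .50
```

With these two sources, .42 of the probability mass lands on the empty set and is redistributed by the denominator, and no singleton set reaches the .50 threshold, so the anaphor would be left unresolved.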
'In our coreference resolver, we define θ to be the set of all candidate antecedents for an anaphor.', 'Each knowledge source then assigns a probability estimate to each candidate, which represents its belief that the candidate is the antecedent for the anaphor.', 'The probabilities are incorporated into the Dempster-Shafer model using Equation 1.', 'To resolve the anaphor, we survey the final belief values assigned to each candidate’s singleton set.', 'If a candidate has a belief value ≥ .50, then we select that candidate as the antecedent for the anaphor.', 'If no candidate satisfies this condition (which is often the case), then the anaphor is left unresolved.', 'One of the strengths of the Dempster-Shafer model is its natural ability to recognize when several credible hypotheses are still in play.', 'In this situation, BABAR takes the conservative approach and declines to make a resolution.', '4 Evaluation Results.', '4.1 Corpora.', 'We evaluated BABAR on two domains: terrorism and natural disasters.', 'We used the MUC4 terrorism corpus (MUC4 Proceedings, 1992) and news articles from the Reuter’s text collection8 that had a subject code corresponding to natural disasters.', '8 Volume 1, English language, 1996–1997, Format version 1, correction level 0.', 'For each domain, we created a blind test set by manually annotating 40 documents with anaphoric chains, which represent sets of noun phrases that are coreferent (as done for MUC6 (MUC6 Proceedings, 1995)).', 'In the terrorism domain, 1600 texts were used for training and the 40 test documents contained 322 anaphoric links.', 'For the disasters domain, 8245 texts were used for training and the 40 test documents contained 447 anaphoric links.', 'In recent years, coreference resolvers have been evaluated as part of MUC6 and MUC7 (MUC7 Proceedings, 1998).', 'We considered using the MUC6 and MUC7 data sets, but their training sets were far too small to learn reliable co-occurrence statistics for a large set of contextual role relationships.', 'Therefore we opted to use the much larger MUC4 and Reuters corpora.9', '9 We would be happy to make our manually annotated test data available to others who also want to evaluate their coreference resolver on the MUC4 or Reuters collections.', '4.2 Experiments.', 'We adopted the MUC6 guidelines for evaluating coreference relationships based on transitivity in anaphoric chains.', 'For example, if {NP1, NP2, NP3} are all coreferent, then each NP must be linked to one of the other two NPs.', 'First, we evaluated BABAR using only the seven general knowledge sources.', 'Table 2 shows BABAR’s performance.', 'Table 2: General Knowledge Sources. Terrorism (Rec / Pr / F): Def. NPs .43 / .79 / .55, Pronouns .50 / .72 / .59, Total .46 / .76 / .57; Disasters (Rec / Pr / F): Def. NPs .42 / .91 / .58, Pronouns .42 / .82 / .56, Total .42 / .87 / .57.', 'We measured recall (Rec), precision (Pr), and the F-measure (F) with recall and precision equally weighted.', 'BABAR achieved recall in the 42–50% range for both domains, with 76% precision overall for terrorism and 87% precision for natural disasters.', 'We suspect that the higher precision in the disasters domain may be due to its substantially larger training corpus.', 'Table 3 shows BABAR’s performance when the four contextual role knowledge sources are added.', 'Table 3: General + Contextual Role Knowledge Sources.', 'Table 4: Individual Performance of KSs for Terrorism.', 'Table 5: Individual Performance of KSs for Disasters.', 'The F-measure score increased for both domains, reflecting a substantial increase in recall with a small decrease in precision.', 'The contextual role knowledge had the greatest impact on pronouns: +13% recall for terrorism and +15% recall for disasters, with a +1% precision gain in terrorism and a small precision drop of -3% in disasters.', 'The difference in performance between pronouns and definite noun phrases surprised us.', 'Analysis of the data revealed that the contextual role knowledge is especially helpful for resolving pronouns because, in general, they are semantically weaker than definite NPs.', 'Since pronouns carry little semantics of their own, resolving them depends almost entirely on context.', 'In contrast, even though context can be helpful for resolving definite NPs, context can be trumped by the semantics of the nouns themselves.', 'For example, even if the contexts surrounding an anaphor and candidate match exactly, they are not coreferent if they have substantially different meanings (e.g., “the mayor” vs.
â\x80\x9cthe journalistâ\x80\x9d).', 'We also performed experiments to evaluate the impact of each type of contextual role knowledge separately.', 'Tables 4 and 5 show BABARâ\x80\x99s performance when just one contextual role knowledge source is used at a time.', 'For definite NPs, the results are a mixed bag: some knowledge sources increased recall a little, but at the expense of some precision.', 'For pronouns, however, all of the knowledge sources increased recall, often substantially, and with little if any decrease in precision.', 'This result suggests that all of contextual role KSs can provide useful information for resolving anaphora.', 'Tables 4 and 5 also show that putting all of the contextual role KSs in play at the same time produces the greatest performance gain.', 'There are two possible reasons: (1) the knowledge sources are resolving different cases of anaphora, and (2) the knowledge sources provide multiple pieces of evidence in support of (or against) a candidate, thereby acting synergistically to push the DempsterShafer model over the belief threshold in favor of a single candidate.', '5 Related Work.', 'Many researchers have developed coreference resolvers, so we will only discuss the methods that are most closely related to BABAR.', 'Dagan and Itai (Dagan and Itai, 1990) experimented with co-occurrence statistics that are similar to our lexical caseframe expectations.', 'Their work used subject-verb, verb-object, and adjective-noun relations to compare the contexts surrounding an anaphor and candidate.', 'However their work did not consider other types of lexical expectations (e.g., PP arguments), semantic expectations, or context comparisons like our case- frame network.(Niyu et al., 1998) used unsupervised learning to ac quire gender, number, and animacy information from resolutions produced by a statistical pronoun resolver.', 'The learned information was recycled back into the resolver to improve its performance.', 'This approach is similar to BABAR in that they both acquire knowledge from earlier resolutions.', '(Kehler, 1997) also used a DempsterShafer model to merge evidence from different sources for template-level coreference.', 'Several coreference resolvers have used supervised learning techniques, such as decision trees and rule learners (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001).', 'These systems rely on a training corpus that has been manually annotated with coreference links.', '6 Conclusions.', 'The goal of our research was to explore the use of contextual role knowledge for coreference resolution.', 'We identified three ways that contextual roles can be exploited: (1) by identifying caseframes that co-occur in resolutions, (2) by identifying nouns that co-occur with case- frames and using them to crosscheck anaphor/candidate compatibility, (3) by identifying semantic classes that co- occur with caseframes and using them to crosscheck anaphor/candidate compatability.', 'We combined evidence from four contextual role knowledge sources with evidence from seven general knowledge sources using a DempsterShafer probabilistic model.', 'Our coreference resolver performed well in two domains, and experiments showed that each contextual role knowledge source contributed valuable information.', 'We found that contextual role knowledge was more beneficial for pronouns than for definite noun phrases.', 'This suggests that different types of anaphora may warrant different treatment: definite NP resolution may depend 
more on lexical semantics, while pronoun resolution may depend more on contextual semantics.', 'In future work, we plan to follow-up on this approach and investigate other ways that contextual role knowledge can be used.', '7 Acknowledgements.', 'This work was supported in part by the National Science Foundation under grant IRI9704240.', 'The inventions disclosed herein are the subject of a patent application owned by the University of Utah and licensed on an exclusive basis to Attensity Corporation.']",abstractive -W04-0213,W04-0213,1,3,"This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.","A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees.","['The Potsdam Commentary Corpus', 'A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure.', 'The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation.', 'A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees.', 'Two aspects of the corpus have been presented in previous papers ((Re- itter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).', 'This paper, however, provides a comprehensive overview of the data collection effort and its current state.', 'At present, the â\x80\x98Potsdam Commentary Corpusâ\x80\x99 (henceforth â\x80\x98PCCâ\x80\x99 for short) consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.', 'The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus.', 'Commentaries argue in favor of a specific point of view toward some political issue, often dicussing yet dismissing other points of view; therefore, they typically offer a more interesting rhetorical structure than, say, narrative text or other portions of newspapers.', 'The choice of the particular newspaper was motivated by the fact that the language used in a regional daily is somewhat simpler than that of papers read nationwide.', '(Again, the goal of also in structural features.', 'As an indication, in our core corpus, we found an average sentence length of 15.8 words and 1.8 verbs per sentence, whereas a randomly taken sample of ten commentaries from the national papers Su¨ddeutsche Zeitung and Frankfurter Allgemeine has 19.6 words and 2.1 verbs per sentence.', 'The commentaries in PCC are all of roughly the same length, ranging from 8 to 10 sentences.', 'For illustration, an English translation of one of the commentaries is given in Figure 1.', 'The paper is organized as follows: Section 2 explains the different layers of annotation that have been produced or are being produced.', 'Section 3 discusses the applications that have been completed with PCC, or are under way, or are planned for the future.', 'Section 4 draws some conclusions from the present state of the effort.', 'The corpus has been annotated with six different types of 
information, which are characterized in the following subsections.', 'Not all the layers have been produced for all the texts yet.', 'There is a â\x80\x98core corpusâ\x80\x99 of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below.', 'All annotations are done with specific tools and in XML; each layer has its own DTD.', 'This offers the well-known advantages for inter- changability, but it raises the question of how to query the corpus across levels of annotation.', 'We will briefly discuss this point in Section 3.1.', '2.1 Part-of-speech tags.', 'All commentaries have been tagged with part-of-speech information using Brantsâ\x80\x99 TnT1 tagger and the Stuttgart/Tu¨bingen Tag Set automatic analysis was responsible for this decision.)', 'This is manifest in the lexical choices but 1 www.coli.unisb.de/â\x88¼thorsten/tnt/ Dagmar Ziegler is up to her neck in debt.', 'Due to the dramatic fiscal situation in Brandenburg she now surprisingly withdrew legislation drafted more than a year ago, and suggested to decide on it not before 2003.', 'Unexpectedly, because the ministries of treasury and education both had prepared the teacher plan together.', 'This withdrawal by the treasury secretary is understandable, though.', 'It is difficult to motivate these days why one ministry should be exempt from cutbacks â\x80\x94 at the expense of the others.', 'Reicheâ\x80\x99s colleagues will make sure that the concept is waterproof.', 'Indeed there are several open issues.', 'For one thing, it is not clear who is to receive settlements or what should happen in case not enough teachers accept the offer of early retirement.', 'Nonetheless there is no alternative to Reicheâ\x80\x99s plan.', 'The state in future has not enough work for its many teachers.', 'And time is short.', 'The significant drop in number of pupils will begin in the fall of 2003.', 'The government has to make a decision, and do it quickly.', 'Either save money at any cost - or give priority to education.', 'Figure 1: Translation of PCC sample commentary (STTS)2.', '2.2 Syntactic structure.', 'Annotation of syntactic structure for the core corpus has just begun.', 'We follow the guidelines developed in the TIGER project (Brants et al. 
2002) for syntactic annotation of German newspaper text, using the Annotate3 tool for interactive construction of tree structures.', '2.3 Rhetorical structure.', 'All commentaries have been annotated with rhetorical structure, using RSTTool4 and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988).', 'Two annotators received training with the RST definitions and started the process with a first set of 10 texts, the results of which were intensively discussed and revised.', 'Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.', 'Thus we opted not to take the step of creating more precise written annotation guidelines (as (Carlson, Marcu 2001) did for English), which would then allow for measuring inter-annotator agreement.', 'The motivation for our more informal approach was the intuition that there are so many open problems in rhetorical analysis (and more so for German than for English; see below) that the main task is qualitative investigation, whereas rigorous quantitative analyses should be performed at a later stage.', 'One conclusion drawn from this annotation effort was that for humans and machines alike, 2 www.sfs.nphil.unituebingen.de/Elwis/stts/ stts.html 3 www.coli.unisb.de/sfb378/negra-corpus/annotate.', 'html 4 www.wagsoft.com/RSTTool assigning rhetorical relations is a process loaded with ambiguity and, possibly, subjectivity.', 'We respond to this on the one hand with a format for its underspecification (see 2.4) and on the other hand with an additional level of annotation that attends only to connectives and their scopes (see 2.5), which is intended as an intermediate step on the long road towards a systematic and objective treatment of rhetorical structure.', '2.4 Underspecified rhetorical structure.', 'While RST (Mann, Thompson 1988) proposed that a single relation hold between adjacent text segments, SDRT (Asher, Lascarides 2003) maintains that multiple relations may hold simultaneously.', 'Within the RST â\x80\x9cuser communityâ\x80\x9d there has also been discussion whether two levels of discourse structure should not be systematically distinguished (intentional versus informational).', 'Some relations are signalled by subordinating conjunctions, which clearly demarcate the range of the text spans related (matrix clause, embedded clause).', 'When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back.', 'When the connective is an adverbial, there is much less clarity as to the range of the spans.', 'Assigning rhetorical relations thus poses questions that can often be answered only subjectively.', 'Our annotators pointed out that very often they made almost random decisions as to what relation to choose, and where to locate the boundary of a span.', '(Carlson, Marcu 2001) responded to this situation with relatively precise (and therefore long!)', 'annotation guidelines that tell annotators what to do in case of doubt.', 'Quite often, though, these directives fulfill the goal of increasing annotator agreement without in fact settling the theoretical question; i.e., the directives are clear but not always very well motivated.', 'In (Reitter, Stede 2003) we went a different way and suggested URML5, an XML format for underspecifying rhetorical structure: a number of relations can be assigned instead of a single one, competing analyses 
can be represented with shared forests.', 'The rhetorical structure annotations of PCC have all been converted to URML.', 'There are still some open issues to be resolved with the format, but it represents a first step.', 'What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly.', '2.5 Connectives with scopes.', 'For the â\x80\x98coreâ\x80\x99 portion of PCC, we found that on average, 35% of the coherence relations in our RST annotations are explicitly signalled by a lexical connective.6 When adding the fact that connectives are often ambiguous, one has to conclude that prospects for an automatic analysis of rhetorical structure using shallow methods (i.e., relying largely on connectives) are not bright â\x80\x94 but see Sections 3.2 and 3.3 below.', 'Still, for both human and automatic rhetorical analysis, connectives are the most important source of surface information.', 'We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes.', 'This was also inspired by the work on the Penn Discourse Tree Bank7 , which follows similar goals for English.', 'For effectively annotating connectives/scopes, we found that existing annotation tools were not well-suited, for two reasons: â\x80¢ Some tools are dedicated to modes of annotation (e.g., tiers), which could only quite un-intuitively be used for connectives and scopes.', 'â\x80¢ Some tools would allow for the desired annotation mode, but are so complicated (they can be used for many other purposes as well) that annotators take a long time getting used to them.', '5 â\x80\x98Underspecified Rhetorical Markup Languageâ\x80\x99 6 This confirms the figure given by (Schauer, Hahn.', 'Consequently, we implemented our own annotation tool ConAno in Java (Stede, Heintze 2004), which provides specifically the functionality needed for our purpose.', 'It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives.', 'The annotator can then â\x80\x9cclick awayâ\x80\x9d those words that are here not used as connectives (such as the conjunction und (â\x80\x98andâ\x80\x99) used in lists, or many adverbials that are ambiguous between connective and discourse particle).', 'Then, moving from connective to connective, ConAno sometimes offers suggestions for its scope (using heuristics like â\x80\x98for sub- junctor, mark all words up to the next comma as the first segmentâ\x80\x99), which the annotator can accept with a mouseclick or overwrite, marking instead the correct scope with the mouse.', 'When finished, the whole material is written into an XML-structured annotation file.', '2.6 Co-reference.', 'We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet.', 'The tool we use is MMAX8, which has been specifically designed for marking co-reference.', 'Upon identifying an anaphoric expression (currently restricted to: pronouns, prepositional adverbs, definite noun phrases), the an- notator first marks the antecedent expression (currently restricted to: various kinds of noun phrases, prepositional phrases, verb phrases, sentences) and then establishes the link between the two.', 'Links can be of two different kinds: anaphoric 
or bridging (definite noun phrases picking up an antecedent via world-knowledge).', 'â\x80¢ Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol â\x80\x93 the PopArt artist), or is-a (e.g., Andy Warhol was often hunted by photographers.', 'This fact annoyed especially his dog...).', 'â\x80¢ Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident.', 'The wounds are still healing.), entity-attribute (e.g., She 2001), who determined that in their corpus of German computer tests, 38% of relations were lexically signalled.', '7 www.cis.upenn.edu/â\x88¼pdtb/ 8 www.eml-research.de/english/Research/NLP/ Downloads had to buy a new car.', 'The price shocked her.), or same-kind (e.g., Her health insurance paid for the hospital fees, but the automobile insurance did not cover the repair.).', 'For displaying and querying the annoated text, we make use of the Annis Linguistic Database developed in our group for a large research effort (â\x80\x98Sonderforschungsbereichâ\x80\x99) revolving around 9 2.7 Information structure.', 'information structure.', 'The implementation is In a similar effort, (G¨otze 2003) developed a proposal for the theory-neutral annotation of information structure (IS) â\x80\x94 a notoriously difficult area with plenty of conflicting and overlapping terminological conceptions.', 'And indeed, converging on annotation guidelines is even more difficult than it is with co-reference.', 'Like in the co-reference annotation, G¨otzeâ\x80\x99s proposal has been applied by two annotators to the core corpus but it has not been systematically evaluated yet.', 'We use MMAX for this annotation as well.', 'Here, annotation proceeds in two phases: first, the domains and the units of IS are marked as such.', 'The domains are the linguistic spans that are to receive an IS-partitioning, and the units are the (smaller) spans that can play a role as a constituent of such a partitioning.', 'Among the IS-units, the referring expressions are marked as such and will in the second phase receive a label for cognitive status (active, accessible- text, accessible-situation, inferrable, inactive).', 'They are also labelled for their topicality (yes / no), and this annotation is accompanied by a confidence value assigned by the annotator (since it is a more subjective matter).', 'Finally, the focus/background partition is annotated, together with the focus question that elicits the corresponding answer.', 'Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions.', 'For all these annotation taks, G¨otze developed a series of questions (essentially a decision tree) designed to lead the annotator to the ap propriate judgement.', 'Having explained the various layers of annotation in PCC, we now turn to the question what all this might be good for.', 'This concerns on the one hand the basic question of retrieval, i.e. 
searching for information across the annotation layers (see 3.1).', 'On the other hand, we are interested in the application of rhetorical analysis or â\x80\x98discourse parsingâ\x80\x99 (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).', 'basically complete, yet some improvements and extensions are still under way.', 'The web-based Annis imports data in a variety of XML formats and tagsets and displays it in a tier-orientedway (optionally, trees can be drawn more ele gantly in a separate window).', 'Figure 2 shows a screenshot (which is of somewhat limited value, though, as color plays a major role in signalling the different statuses of the information).', 'In the small window on the left, search queries can be entered, here one for an NP that has been annotated on the co-reference layer as bridging.', 'The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) â\x80¢ the full text, â\x80¢ the annotation values for the activated annotation set (co-reference), â\x80¢ the actual annotation tiers, and â\x80¢ the portion of text currently â\x80\x98in focusâ\x80\x99 (which also appears underlined in the full text).', 'Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels.', 'Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase.', '3.2 Stochastic rhetorical analysis.', 'In an experiment on automatic rhetorical parsing, the RST-annotations and PoS tags were used by (Reitter 2003) as a training corpus for statistical classification with Support Vector Machines.', 'Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.', 'For the English RST-annotated corpus that is made available via LDC, his corresponding result is 62%.', 'Future work along these lines will incorporate other layers of annotation, in particular the syntax information.', '9 www.ling.unipotsdam.de/sfb/ Figure 2: Screenshot of Annis Linguistic Database 3.3 Symbolic and knowledge-based.', 'rhetorical analysis We are experimenting with a hybrid statistical and knowledge-based system for discourse parsing and summarization (Stede 2003), (Hanneforth et al. 
2003), again targeting the genre of commentaries.', 'The idea is to have a pipeline of shallow-analysis modules (tagging, chunk- ing, discourse parsing based on connectives) and map the resulting underspecified rhetorical tree (see Section 2.4) into a knowledge base that may contain domain and world knowledge for enriching the representation, e.g., to resolve references that cannot be handled by shallow methods, or to hypothesize coherence relations.', 'In the rhetorical tree, nuclearity information is then used to extract a â\x80\x9ckernel treeâ\x80\x9d that supposedly represents the key information from which the summary can be generated (which in turn may involve co-reference information, as we want to avoid dangling pronouns in a summary).', 'Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity.', 'In order to evaluate and advance this approach, it helps to feed into the knowledge base data that is already enriched with some of the desired information â\x80\x94 as in PCC.', 'That is, we can use the discourse parser on PCC texts, emulating for instance a â\x80\x9cco-reference oracleâ\x80\x9d that adds the information from our co-reference annotations.', 'The knowledge base then can be tested for its relation-inference capabilities on the basis of full-blown co-reference information.', 'Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module.', 'The general idea for the knowledge- based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.', 'For developing these mechanisms, the possibility to feed in hand-annotated information is very useful.', '3.4 Salience-based text generation.', 'Text generation, or at least the two phases of text planning and sentence planning, is a process driven partly by well-motivated choices (e.g., use this lexeme X rather than that more colloquial near-synonym Y ) and partly by con tation like that of PCC can be exploited to look for correlations in particular between syntactic structure, choice of referring expressions, and sentence-internal information structure.', 'A different but supplementary perspective on discourse-based information structure is taken 11ventionalized patterns (e.g., order of informa by one of our partner projects, which is inter tion in news reports).', 'And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet.', 'Preferences for constituent order (especially in languages with relatively free word order) often belong to this group.', 'Trying to integrate constituent ordering and choice of referring expressions, (Chiarcos 2003) developed a numerical model of salience propagation that captures various factors of authorâ\x80\x99s intentions and of information structure for ordering sentences as well as smaller constituents, and picking appropriate referring expressions.10 Chiarcos used the PCC annotations of co-reference and information structure to compute his numerical models for salience projection across the generated texts.', '3.5 Improved models of discourse.', 'structure Besides the applications just sketched, the over- arching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure.', 'One key issue here is to seek a discourse-based model 
of information structure.', 'Since DaneË\x87sâ\x80\x99 proposals of â\x80\x98thematic development patternsâ\x80\x99, a few suggestions have been made as to the existence of a level of discourse structure that would predict the information structure of sentences within texts.', '(Hartmann 1984), for example, used the term Reliefgebung to characterize the distibution of main and minor information in texts (similar to the notion of nuclearity in RST).', '(Brandt 1996) extended these ideas toward a conception of kommunikative Gewichtung (â\x80\x98communicative-weight assignmentâ\x80\x99).', 'A different notion of information structure, is used in work such as that of (?), who tried to characterize felicitous constituent ordering (theme choice, in particular) that leads to texts presenting information in a natural, â\x80\x9cflowingâ\x80\x9d way rather than with abrupt shifts of attention.', 'â\x80\x94ested in correlations between prosody and dis course structure.', 'A number of PCC commentaries will be read by professional news speakers and prosodic features be annotated, so that the various annotation layers can be set into correspondence with intonation patterns.', 'In focus is in particular the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations â\x80\x94 or groups of relations in particular configurations â\x80\x94 are signalled by speakers with prosodic means.', 'Besides information structure, the second main goal is to enhance current models of rhetorical structure.', 'As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on apects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graphâ\x80\x99s structures â\x80\x94 cf.', '(Webber et al., 2003)).', 'Again, the idea is that having a picture of syntax, co-reference, and sentence-internal information structure at oneâ\x80\x99s disposal should aid in finding models of discourse structure that are more explanatory and can be empirically supported.', 'The PCC is not the result of a funded project.', 'Instead, the designs of the various annotation layers and the actual annotation work are results of a series of diploma theses, of studentsâ\x80\x99 work in course projects, and to some extent of paid assistentships.', 'This means that the PCC cannot grow particularly quickly.', 'After the first step towards breadth had been taken with the PoS-tagging, RST annotation, and URML conversion of the entire corpus of 170 texts12 , emphasis shifted towards depth.', 'Hence we decided to select ten commentaries to form a â\x80\x98core corpusâ\x80\x99, for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence.', 'Cur In order to ground such approaches in linguistic observation and description, a multi-level anno 10 For an exposition of the idea as applied to the task of text planning, see (Chiarcos, Stede 2004).', '11 www.ling.unipotsdam.de/sfb/projekt a3.php 12 This step was carried out in the course of the diploma thesis work of David Reitter (2003), which de serves special mention here.', 'rently, some annotations (in particular the connectives and scopes) have already moved beyond the core corpus; the others will grow step by step.', 'The kind of annotation work presented here would clearly benefit from the emergence of standard formats and tag sets, which 
could lead to sharable resources of larger size.', 'Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages.', 'Nonetheless, the prospect of a network of annotated discourse resources seems particularly promising if not only a single annotation layer is used but a whole variety of them, so that a systematic search for correlations between them becomes possible, which in turn can lead to more explanatory models of discourse structure.']",abstractive -C02-1025,C02-1025,5,14,Their results show that their high performance NER use less training data than other systems.,"These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar 
amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . 
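The testing procedure described above (per-token class probabilities from the maximum entropy classifier, a transition probability of 1 for admissible class sequences and 0 otherwise, and dynamic programming over their product) could be sketched as follows; the function names and the toy admissibility rule are illustrative assumptions, not details taken from MENERGI.

```python
def best_sequence(token_probs, admissible):
    """Viterbi-style decoding for the testing step described above.
    token_probs: one dict per token, mapping each word class to the probability
    assigned by the maximum entropy classifier.
    admissible(prev, cur): True/False, playing the role of the 1/0 transition probability."""
    scores = dict(token_probs[0])        # best score of a sequence ending in each class
    backpointers = []
    for probs in token_probs[1:]:
        new_scores, pointers = {}, {}
        for cls, p in probs.items():
            candidates = [(s, prev) for prev, s in scores.items() if admissible(prev, cls)]
            if candidates:
                best_score, best_prev = max(candidates)
                new_scores[cls] = best_score * p
                pointers[cls] = best_prev
        scores = new_scores
        backpointers.append(pointers)
    # Trace back the highest-probability admissible class sequence.
    cls = max(scores, key=scores.get)
    seq = [cls]
    for pointers in reversed(backpointers):
        cls = pointers[cls]
        seq.append(cls)
    return list(reversed(seq))

# Toy admissibility check in the spirit of the example in the text:
# "person_begin" may not be followed by "location_unique".
def admissible(prev, cur):
    return not (prev == "person_begin" and cur == "location_unique")
```

In practice the per-token dictionaries would hold the classifier's distribution over the 29 classes, and the admissibility check would encode the begin/continue/end/unique constraints.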
Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
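A rough sketch of how the binary local feature groups described above (zone, case and zone, token information, first word, and the lexicon features of the neighbouring tokens) might be computed; the feature-string spellings and the helper name are illustrative, and the frequency cutoff on lexicon features is omitted.

```python
def local_features(tokens, i, zone):
    """Emit binary features for tokens[i] along the lines of the feature groups
    described above; the exact feature strings used by MENERGI may differ."""
    w = tokens[i]
    feats = {"nonContextual", "zone=" + zone}
    if w[0].isupper():
        feats.add("initCaps|" + zone)
    if w.isupper():
        feats.add("allCaps|" + zone)
    if w[0].islower() and any(c.isupper() for c in w):
        feats.add("mixedCaps|" + zone)
    if w[0].isupper() and w.endswith("."):       # token information, e.g. "Mr."
        feats.add("initCapPeriod")
    if i == 0:
        feats.add("firstword")
    feats.add("lexicon=" + w.lower())            # subject to a frequency cutoff in practice
    case = "initCaps" if w[0].isupper() else "not-initCaps"
    if i > 0:
        feats.add("prevWord=" + tokens[i - 1].lower() + "|" + case)
    if i + 1 < len(tokens):
        feats.add("nextWord=" + tokens[i + 1].lower() + "|" + case)
    return feats

print(sorted(local_features(["Mr.", "Barry", "spoke"], 0, "TXT")))
```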
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . .', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
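The Corporate-Suffix-List construction just described (for each final token of an organization name, count how many distinct preceding tokens it occurs with, and keep the most frequent ones) could look roughly like this; the org_names input and the top_n cutoff are illustrative assumptions, since the paper does not give the exact threshold.

```python
from collections import defaultdict

def build_corporate_suffix_list(org_names, top_n=20):
    """Count, for each final token of an organization name, the number of
    *distinct* preceding tokens ("frequency" in the sense described above),
    and return the most frequent final tokens."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    freq = {last: len(prevs) for last, prevs in preceding.items()}
    return sorted(freq, key=freq.get, reverse=True)[:top_n]

# The example from the text: "Electric Corp." seen 3 times and
# "Manufacturing Corp." seen 5 times gives Corp. a frequency of 2.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(build_corporate_suffix_list(names))   # ['Corp.']
```

A Person-Prefix-List would be built the same way, but counting distinct following tokens for candidate prefixes.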
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . .', '(3)In sentence (1), McCann can be a person or an orga nization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with â\x80\x9cBush put a freeze on . . .', 'â\x80\x9d, because Bush is the first word, the initial caps might be due to its position (as in â\x80\x9cThey put a freeze on . . .', 'â\x80\x9d).', 'If somewhere else in the document we see â\x80\x9crestrictions put in place by President Bushâ\x80\x9d, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (No. of Articles / No. of Tokens). MENERGI: MUC6 318 / 160,000, MUC7 200 / 180,000; IdentiFinder: MUC6 – / 650,000, MUC7 – / 790,000; MENE: MUC6 – / –, MUC7 350 / 321,000.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's.
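As a quick sanity check on the error-reduction figures quoted above (27% for MUC6, 14% for MUC7): they follow from the baseline and final F-measures in Table 3 if the error is taken as 100 minus the F-measure. The small helper below is purely illustrative.

```python
def error_reduction(baseline_f, full_f):
    """Relative reduction in F-measure error (100 - F) from the baseline
    (local features only) to the system with all global feature groups."""
    return 100.0 * (full_f - baseline_f) / (100.0 - baseline_f)

print(round(error_reduction(90.75, 93.27)))   # 27  (MUC6)
print(round(error_reduction(85.22, 87.24)))   # 14  (MUC7)
```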
""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's."", ""IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu)', '(Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is 
sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive -P08-1043_swastika,P08-1043,1,3,They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.,Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.,"['A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing', 'Morphological processes in Semitic languages deliver space-delimited words which introduce multiple, distinct, syntactic units into the structure of the input sentence.', 'These words are in turn highly ambiguous, breaking the assumption underlying most parsers that the yield of a tree for a given sentence is known in advance.', 'Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.', 'Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique our model outperforms previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.', 'Current state-of-the-art broad-coverage parsers assume a direct correspondence between the lexical items ingrained in the proposed syntactic analyses (the yields of syntactic parse-trees) and the spacedelimited tokens (henceforth, ‘tokens’) that constitute the unanalyzed surface forms (utterances).', 'In Semitic languages the situation is very different.', 'In Modern Hebrew (Hebrew), a Semitic language with very rich morphology, particles marking conjunctions, prepositions, complementizers and relativizers are bound elements prefixed to the word (Glinert, 1989).', ""The Hebrew token ‘bcl’1, for example, stands for the complete prepositional phrase 'We adopt here the transliteration of (Sima’an et al., 2001)."", '“in the shadow”.', 'This token may further embed into a larger utterance, e.g., ‘bcl hneim’ (literally “in-the-shadow the-pleasant”, meaning roughly “in the pleasant shadow”) in which the dominated Noun is modified by a proceeding space-delimited adjective.', 'It should be clear from the onset that the particle b (“in”) in ‘bcl’ may then attach higher than the bare noun cl (“shadow”).', 'This leads to word- and constituent-boundaries discrepancy, which breaks the assumptions underlying current state-of-the-art statistical parsers.', 'One way to approach this discrepancy is to assume a preceding phase of morphological segmentation for extracting the different lexical items that exist at the token level (as is done, to the best of our knowledge, in all parsing related work on Arabic and its dialects (Chiang et 
al., 2006)).', 'The input for the segmentation task is however highly ambiguous for Semitic languages, and surface forms (tokens) may admit multiple possible analyses as in (BarHaim et al., 2007; Adler and Elhadad, 2006).', 'The aforementioned surface form bcl, for example, may also stand for the lexical item “onion”, a Noun.', 'The implication of this ambiguity for a parser is that the yield of syntactic trees no longer consists of spacedelimited tokens, and the expected number of leaves in the syntactic analysis in not known in advance.', 'Tsarfaty (2006) argues that for Semitic languages determining the correct morphological segmentation is dependent on syntactic context and shows that increasing information sharing between the morphological and the syntactic components leads to improved performance on the joint task.', 'Cohen and Smith (2007) followed up on these results and proposed a system for joint inference of morphological and syntactic structures using factored models each designed and trained on its own.', 'Here we push the single-framework conjecture across the board and present a single model that performs morphological segmentation and syntactic disambiguation in a fully generative framework.', 'We claim that no particular morphological segmentation is a-priory more likely for surface forms before exploring the compositional nature of syntactic structures, including manifestations of various long-distance dependencies.', 'Morphological segmentation decisions in our model are delegated to a lexeme-based PCFG and we show that using a simple treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling our model outperforms (Tsarfaty, 2006) and (Cohen and Smith, 2007) on the joint task and achieves state-of-the-art results on a par with current respective standalone models.2', 'Segmental morphology Hebrew consists of seven particles m(“from”) f(“when”/“who”/“that”) h(“the”) w(“and”) k(“like”) l(“to”) and b(“in”). which may never appear in isolation and must always attach as prefixes to the following open-class category item we refer to as stem.', 'Several such particles may be prefixed onto a single stem, in which case the affixation is subject to strict linear precedence constraints.', 'Co-occurrences among the particles themselves are subject to further syntactic and lexical constraints relative to the stem.', 'While the linear precedence of segmental morphemes within a token is subject to constraints, the dominance relations among their mother and sister constituents is rather free.', 'The relativizer f(“that”) for example, may attach to an arbitrarily long relative clause that goes beyond token boundaries.', 'The attachment in such cases encompasses a long distance dependency that cannot be captured by Markovian processes that are typically used for morphological disambiguation.', 'The same argument holds for resolving PP attachment of a prefixed preposition or marking conjunction of elements of any kind.', 'A less canonical representation of segmental morphology is triggered by a morpho-phonological process of omitting the definite article h when occurring after the particles b or l. 
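To make the prefix combinatorics just described concrete, the following is a small sketch that enumerates candidate splits of a space-delimited token into prefix particles plus a stem. It is only an illustration under stated assumptions: the particle inventory is the seven listed above, the linear-precedence and co-occurrence constraints are ignored, the h-omission after b and l discussed next is not modelled, and the function name candidate_segmentations is ours.

```python
# Minimal sketch, assuming the seven prefix particles listed above and
# ignoring the ordering and co-occurrence constraints they obey.
PARTICLES = ["m", "f", "h", "w", "k", "l", "b"]

def candidate_segmentations(token, prefixes=()):
    """Enumerate (particles..., stem) splits of a surface token."""
    segs = []
    if token:                       # the remainder is one possible stem
        segs.append(prefixes + (token,))
    for p in PARTICLES:
        if token.startswith(p) and len(token) > len(p):
            segs.extend(candidate_segmentations(token[len(p):], prefixes + (p,)))
    return segs

# 'bcl' can be read as a single stem ("onion") or as b + cl ("in (the) shadow").
for seg in candidate_segmentations("bcl"):
    print("-".join(seg))
```

Note that the definite reading of bcl (b + h + cl) would only be produced by an analyzer that also models the omission of h after b and l, which this sketch deliberately leaves out.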
This process triggers ambiguity as for the definiteness status of Nouns following these particles.We refer to such cases in which the concatenation of elements does not strictly correspond to the original surface form as super-segmental morphology.', 'An additional case of super-segmental morphology is the case of Pronominal Clitics.', 'Inflectional features marking pronominal elements may be attached to different kinds of categories marking their pronominal complements.', 'The additional morphological material in such cases appears after the stem and realizes the extended meaning.', 'The current work treats both segmental and super-segmental phenomena, yet we note that there may be more adequate ways to treat supersegmental phenomena assuming Word-Based morphology as we explore in (Tsarfaty and Goldberg, 2008).', 'Lexical and Morphological Ambiguity The rich morphological processes for deriving Hebrew stems give rise to a high degree of ambiguity for Hebrew space-delimited tokens.', 'The form fmnh, for example, can be understood as the verb “lubricated”, the possessed noun “her oil”, the adjective “fat” or the verb “got fat”.', 'Furthermore, the systematic way in which particles are prefixed to one another and onto an open-class category gives rise to a distinct sort of morphological ambiguity: space-delimited tokens may be ambiguous between several different segmentation possibilities.', 'The same form fmnh can be segmented as f-mnh, f (“that”) functioning as a reletivizer with the form mnh.', 'The form mnh itself can be read as at least three different verbs (“counted”, “appointed”, “was appointed”), a noun (“a portion”), and a possessed noun (“her kind”).', 'Such ambiguities cause discrepancies between token boundaries (indexed as white spaces) and constituent boundaries (imposed by syntactic categories) with respect to a surface form.', 'Such discrepancies can be aligned via an intermediate level of PoS tags.', 'PoS tags impose a unique morphological segmentation on surface tokens and present a unique valid yield for syntactic trees.', 'The correct ambiguity resolution of the syntactic level therefore helps to resolve the morphological one, and vice versa.', 'Morphological analyzers for Hebrew that analyze a surface form in isolation have been proposed by Segal (2000), Yona and Wintner (2005), and recently by the knowledge center for processing Hebrew (Itai et al., 2006).', 'Such analyzers propose multiple segmentation possibilities and their corresponding analyses for a token in isolation but have no means to determine the most likely ones.', 'Morphological disambiguators that consider a token in context (an utterance) and propose the most likely morphological analysis of an utterance (including segmentation) were presented by Bar-Haim et al. (2005), Adler and Elhadad (2006), Shacham and Wintner (2007), and achieved good results (the best segmentation result so far is around 98%).', 'The development of the very first Hebrew Treebank (Sima’an et al., 2001) called for the exploration of general statistical parsing methods, but the application was at first limited.', 'Sima’an et al. 
(2001) presented parsing results for a DOP tree-gram model using a small data set (500 sentences) and semiautomatic morphological disambiguation.', 'Tsarfaty (2006) was the first to demonstrate that fully automatic Hebrew parsing is feasible using the newly available 5000 sentences treebank.', 'Tsarfaty and Sima’an (2007) have reported state-of-the-art results on Hebrew unlexicalized parsing (74.41%) albeit assuming oracle morphological segmentation.', 'The joint morphological and syntactic hypothesis was first discussed in (Tsarfaty, 2006; Tsarfaty and Sima’an, 2004) and empirically explored in (Tsarfaty, 2006).', 'Tsarfaty (2006) used a morphological analyzer (Segal, 2000), a PoS tagger (Bar-Haim et al., 2005), and a general purpose parser (Schmid, 2000) in an integrated framework in which morphological and syntactic components interact to share information, leading to improved performance on the joint task.', 'Cohen and Smith (2007) later on based a system for joint inference on factored, independent, morphological and syntactic components of which scores are combined to cater for the joint inference task.', 'Both (Tsarfaty, 2006; Cohen and Smith, 2007) have shown that a single integrated framework outperforms a completely streamlined implementation, yet neither has shown a single generative model which handles both tasks.', 'A Hebrew surface token may have several readings, each of which corresponding to a sequence of segments and their corresponding PoS tags.', 'We refer to different readings as different analyses whereby the segments are deterministic given the sequence of PoS tags.', 'We refer to a segment and its assigned PoS tag as a lexeme, and so analyses are in fact sequences of lexemes.', 'For brevity we omit the segments from the analysis, and so analysis of the form “fmnh” as f/REL mnh/VB is represented simply as REL VB.', 'Such tag sequences are often treated as “complex tags” (e.g.', 'REL+VB) (cf.', '(Bar-Haim et al., 2007; Habash and Rambow, 2005)) and probabilities are assigned to different analyses in accordance with the likelihood of their tags (e.g., “fmnh is 30% likely to be tagged NN and 70% likely to be tagged REL+VB”).', 'Here we do not submit to this view.', 'When a token fmnh is to be interpreted as the lexeme sequence f/REL mnh/VB, the analysis introduces two distinct entities, the relativizer f (“that”) and the verb mnh (“counted”), and not as the complex entity “that counted”.', 'When the same token is to be interpreted as a single lexeme fmnh, it may function as a single adjective “fat”.', 'There is no relation between these two interpretations other then the fact that their surface forms coincide, and we argue that the only reason to prefer one analysis over the other is compositional.', 'A possible probabilistic model for assigning probabilities to complex analyses of a surface form may be and indeed recent sequential disambiguation models for Hebrew (Adler and Elhadad, 2006) and Arabic (Smith et al., 2005) present similar models.', 'We suggest that in unlexicalized PCFGs the syntactic context may be explicitly modeled in the derivation probabilities.', 'Hence, we take the probability of the event fmnh analyzed as REL VB to be This means that we generate f and mnh independently depending on their corresponding PoS tags, and the context (as well as the syntactic relation between the two) is modeled via the derivation resulting in a sequence REL VB spanning the form fmnh. 
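The independence assumption just stated is easy to spell out: under the lexeme-based model, the probability of reading fmnh as f/REL mnh/VB is a product of per-tag lexical emission probabilities, and every further preference between competing analyses is left to the PCFG derivation. The snippet below is a toy illustration with invented counts, not estimates from the treebank.

```python
from collections import Counter

# Hypothetical counts c(tag, segment); the real model uses relative-frequency
# estimates from the training treebank, with smoothing for rare segments.
counts = Counter({("REL", "f"): 30, ("VB", "mnh"): 4, ("VB", "ktb"): 8,
                  ("NN", "mnh"): 2, ("JJ", "fmnh"): 1, ("JJ", "gdwl"): 9})
tag_totals = Counter()
for (tag, seg), c in counts.items():
    tag_totals[tag] += c

def p_emit(tag, seg):
    """Lexical rule probability P(tag -> (seg, tag)) by relative frequency."""
    total = tag_totals[tag]
    return counts[(tag, seg)] / total if total else 0.0

def lexeme_sequence_prob(lexemes):
    """Product of emission probabilities for one analysis (lexeme sequence).

    In the full model this product is multiplied into the probability of the
    syntactic derivation spanning the same lattice path."""
    prob = 1.0
    for seg, tag in lexemes:
        prob *= p_emit(tag, seg)
    return prob

print(lexeme_sequence_prob([("f", "REL"), ("mnh", "VB")]))  # f/REL mnh/VB
print(lexeme_sequence_prob([("fmnh", "JJ")]))               # fmnh as the adjective "fat"
```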
based on linear context.', 'In our model, however, all lattice paths are taken to be a-priori equally likely.', 'We represent all morphological analyses of a given utterance using a lattice structure.', 'Each lattice arc corresponds to a segment and its corresponding PoS tag, and a path through the lattice corresponds to a specific morphological segmentation of the utterance.', 'This is by now a fairly standard representation for multiple morphological segmentation of Hebrew utterances (Adler, 2001; Bar-Haim et al., 2005; Smith et al., 2005; Cohen and Smith, 2007; Adler, 2007).', 'Figure 1 depicts the lattice for a 2-words sentence bclm hneim.', 'We use double-circles to indicate the space-delimited token boundaries.', 'Note that in our construction arcs can never cross token boundaries.', 'Every token is independent of the others, and the sentence lattice is in fact a concatenation of smaller lattices, one for each token.', 'Furthermore, some of the arcs represent lexemes not present in the input tokens (e.g. h/DT, fl/POS), however these are parts of valid analyses of the token (cf. super-segmental morphology section 2).', 'Segments with the same surface form but different PoS tags are treated as different lexemes, and are represented as separate arcs (e.g. the two arcs labeled neim from node 6 to 7).', 'A similar structure is used in speech recognition.', 'There, a lattice is used to represent the possible sentences resulting from an interpretation of an acoustic model.', 'In speech recognition the arcs of the lattice are typically weighted in order to indicate the probability of specific transitions.', 'Given that weights on all outgoing arcs sum up to one, weights induce a probability distribution on the lattice paths.', 'In sequential tagging models such as (Adler and Elhadad, 2006; Bar-Haim et al., 2007; Smith et al., 2005) weights are assigned according to a language model The input for the joint task is a sequence W = w1, ... , wn of space-delimited tokens.', 'Each token may admit multiple analyses, each of which a sequence of one or more lexemes (we use li to denote a lexeme) belonging a presupposed Hebrew lexicon LEX.', 'The entries in such a lexicon may be thought of as meaningful surface segments paired up with their PoS tags li = (si, pi), but note that a surface segment s need not be a space-delimited token.', 'The Input The set of analyses for a token is thus represented as a lattice in which every arc corresponds to a specific lexeme l, as shown in Figure 1.', 'A morphological analyzer M : W—* L is a function mapping sentences in Hebrew (W E W) to their corresponding lattices (M(W) = L E L).', 'We define the lattice L to be the concatenation of the lattices Li corresponding to the input words wi (s.t.', 'M(wi) = Li).', 'Each connected path (l1 ... lk) E L corresponds to one morphological segmentation possibility of W. The Parser Given a sequence of input tokens W = w1 ... 
wn and a morphological analyzer, we look for the most probable parse tree π s.t.', 'Since the lattice L for a given sentence W is determined by the morphological analyzer M we have which is precisely the formula corresponding to the so-called lattice parsing familiar from speech recognition.', 'Every parse π selects a specific morphological segmentation (l1...lk) (a path through the lattice).', 'This is akin to PoS tags sequences induced by different parses in the setup familiar from English and explored in e.g.', '(Charniak et al., 1996).', 'Our use of an unweighted lattice reflects our belief that all the segmentations of the given input sentence are a-priori equally likely; the only reason to prefer one segmentation over the another is due to the overall syntactic context which is modeled via the PCFG derivations.', 'A compatible view is presented by Charniak et al. (1996) who consider the kind of probabilities a generative parser should get from a PoS tagger, and concludes that these should be P(w|t) “and nothing fancier”.3 In our setting, therefore, the Lattice is not used to induce a probability distribution on a linear context, but rather, it is used as a common-denominator of state-indexation of all segmentations possibilities of a surface form.', 'This is a unique object for which we are able to define a proper probability model.', 'Thus our proposed model is a proper model assigning probability mass to all (7r, L) pairs, where 7r is a parse tree and L is the one and only lattice that a sequence of characters (and spaces) W over our alpha-beth gives rise to.', 'The Grammar Our parser looks for the most likely tree spanning a single path through the lattice of which the yield is a sequence of lexemes.', 'This is done using a simple PCFG which is lexemebased.', 'This means that the rules in our grammar are of two kinds: (a) syntactic rules relating nonterminals to a sequence of non-terminals and/or PoS tags, and (b) lexical rules relating PoS tags to lattice arcs (lexemes).', 'The possible analyses of a surface token pose constraints on the analyses of specific segments.', 'In order to pass these constraints onto the parser, the lexical rules in the grammar are of the form pi —* (si, pi) Parameter Estimation The grammar probabilities are estimated from the corpus using simple relative frequency estimates.', 'Lexical rules are estimated in a similar manner.', 'We smooth Prf(p —* (s, p)) for rare and OOV segments (s E l, l E L, s unseen) using a “per-tag” probability distribution over rare segments which we estimate using relative frequency estimates for once-occurring segments.', '3An English sentence with ambiguous PoS assignment can be trivially represented as a lattice similar to our own, where every pair of consecutive nodes correspond to a word, and every possible PoS assignment for this word is a connecting arc.', 'Handling Unknown tokens When handling unknown tokens in a language such as Hebrew various important aspects have to be borne in mind.', 'Firstly, Hebrew unknown tokens are doubly unknown: each unknown token may correspond to several segmentation possibilities, and each segment in such sequences may be able to admit multiple PoS tags.', 'Secondly, some segments in a proposed segment sequence may in fact be seen lexical events, i.e., for some p tag Prf(p —* (s, p)) > 0, while other segments have never been observed as a lexical event before.', 'The latter arcs correspond to OOV words in English.', 'Finally, the assignments of PoS tags to OOV segments is subject to 
language specific constraints relative to the token it was originated from.', 'Our smoothing procedure takes into account all the aforementioned aspects and works as follows.', 'We first make use of our morphological analyzer to find all segmentation possibilities by chopping off all prefix sequence possibilities (including the empty prefix) and construct a lattice off of them.', 'The remaining arcs are marked OOV.', 'At this stage the lattice path corresponds to segments only, with no PoS assigned to them.', 'In turn we use two sorts of heuristics, orthogonal to one another, to prune segmentation possibilities based on lexical and grammatical constraints.', 'We simulate lexical constraints by using an external lexical resource against which we verify whether OOV segments are in fact valid Hebrew lexemes.', 'This heuristics is used to prune all segmentation possibilities involving “lexically improper” segments.', 'For the remaining arcs, if the segment is in fact a known lexeme it is tagged as usual, but for the OOV arcs which are valid Hebrew entries lacking tags assignment, we assign all possible tags and then simulate a grammatical constraint.', 'Here, all tokeninternal collocations of tags unseen in our training data are pruned away.', 'From now on all lattice arcs are tagged segments and the assignment of probability P(p —* (s, p)) to lattice arcs proceeds as usual.4 A rather pathological case is when our lexical heuristics prune away all segmentation possibilities and we remain with an empty lattice.', 'In such cases we use the non-pruned lattice including all (possibly ungrammatical) segmentation, and let the statistics (including OOV) decide.', 'We empirically control for the effect of our heuristics to make sure our pruning does not undermine the objectives of our joint task.', 'Previous work on morphological and syntactic disambiguation in Hebrew used different sets of data, different splits, differing annotation schemes, and different evaluation measures.', 'Our experimental setup therefore is designed to serve two goals.', 'Our primary goal is to exploit the resources that are most appropriate for the task at hand, and our secondary goal is to allow for comparison of our models’ performance against previously reported results.', 'When a comparison against previous results requires additional pre-processing, we state it explicitly to allow for the reader to replicate the reported results.', 'Data We use the Hebrew Treebank, (Sima’an et al., 2001), provided by the knowledge center for processing Hebrew, in which sentences from the daily newspaper “Ha’aretz” are morphologically segmented and syntactically annotated.', 'The treebank has two versions, v1.0 and v2.0, containing 5001 and 6501 sentences respectively.', 'We use v1.0 mainly because previous studies on joint inference reported results w.r.t. 
v1.0 only.5 We expect that using the same setup on v2.0 will allow a crosstreebank comparison.6 We used the first 500 sentences as our dev set and the rest 4500 for training and report our main results on this split.', 'To facilitate the comparison of our results to those reported by (Cohen and Smith, 2007) we use their data set in which 177 empty and “malformed”7 were removed.', 'The first 3770 trees of the resulting set then were used for training, and the last 418 are used testing.', '(we ignored the 419 trees in their development set.)', 'Morphological Analyzer Ideally, we would use an of-the-shelf morphological analyzer for mapping each input token to its possible analyses.', 'Such resources exist for Hebrew (Itai et al., 2006), but unfortunately use a tagging scheme which is incompatible with the one of the Hebrew Treebank.s For this reason, we use a data-driven morphological analyzer derived from the training data similar to (Cohen and Smith, 2007).', 'We construct a mapping from all the space-delimited tokens seen in the training sentences to their corresponding analyses.', 'Lexicon and OOV Handling Our data-driven morphological-analyzer proposes analyses for unknown tokens as described in Section 5.', 'We use the HSPELL9 (Har’el and Kenigsberg, 2004) wordlist as a lexeme-based lexicon for pruning segmentations involving invalid segments.', 'Models that employ this strategy are denoted hsp.', 'To control for the effect of the HSPELL-based pruning, we also experimented with a morphological analyzer that does not perform this pruning.', 'For these models we limit the options provided for OOV words by not considering the entire token as a valid segmentation in case at least some prefix segmentation exists.', 'This analyzer setting is similar to that of (Cohen and Smith, 2007), and models using it are denoted nohsp, Parser and Grammar We used BitPar (Schmid, 2004), an efficient general purpose parser,10 together with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis.', 'We experimented with increasingly rich grammars read off of the treebank.', 'Our first model is GTplain, a PCFG learned from the treebank after removing all functional features from the syntactic categories.', 'In our second model GTvpi we also distinguished finite and non-finite verbs and VPs as 10Lattice parsing can be performed by special initialization of the chart in a CKY parser (Chappelier et al., 1999).', 'We currently simulate this by crafting a WCFG and feeding it to BitPar.', 'Given a PCFG grammar G and a lattice L with nodes n1 ... nk, we construct the weighted grammar GL as follows: for every arc (lexeme) l E L from node ni to node nj, we add to GL the rule [l --+ tni, tni+1, ... , tnj_1] with a probability of 1 (this indicates the lexeme l spans from node ni to node nj).', 'GL is then used to parse the string tn1 ... tnk_1, where tni is a terminal corresponding to the lattice span between node ni and ni+1.', 'Removing the leaves from the resulting tree yields a parse for L under G, with the desired probabilities.', 'We use a patched version of BitPar allowing for direct input of probabilities instead of counts.', 'We thank Felix Hageloh (Hageloh, 2006) for providing us with this version. 
proposed in (Tsarfaty, 2006).', 'In our third model GTppp we also add the distinction between general PPs and possessive PPs following Goldberg and Elhadad (2007).', 'In our forth model GTnph we add the definiteness status of constituents following Tsarfaty and Sima’an (2007).', 'Finally, model GTv = 2 includes parent annotation on top of the various state-splits, as is done also in (Tsarfaty and Sima’an, 2007; Cohen and Smith, 2007).', 'For all grammars, we use fine-grained PoS tags indicating various morphological features annotated therein.', 'Evaluation We use 8 different measures to evaluate the performance of our system on the joint disambiguation task.', 'To evaluate the performance on the segmentation task, we report SEG, the standard harmonic means for segmentation Precision and Recall F1 (as defined in Bar-Haim et al. (2005); Tsarfaty (2006)) as well as the segmentation accuracy SEGTok measure indicating the percentage of input tokens assigned the correct exact segmentation (as reported by Cohen and Smith (2007)).', 'SEGTok(noH) is the segmentation accuracy ignoring mistakes involving the implicit definite article h.11 To evaluate our performance on the tagging task we report CPOS and FPOS corresponding to coarse- and fine-grained PoS tagging results (F1) measure.', 'Evaluating parsing results in our joint framework, as argued by Tsarfaty (2006), is not trivial under the joint disambiguation task, as the hypothesized yield need not coincide with the correct one.', 'Our parsing performance measures (SY N) thus report the PARSEVAL extension proposed in Tsarfaty (2006).', 'We further report SYNCS, the parsing metric of Cohen and Smith (2007), to facilitate the comparison.', 'We report the F1 value of both measures.', 'Finally, our U (unparsed) measure is used to report the number of sentences to which our system could not propose a joint analysis.', 'The accuracy results for segmentation, tagging and parsing using our different models and our standard data split are summarized in Table 1.', 'In addition we report for each model its performance on goldsegmented input (GS) to indicate the upper bound 11Overt definiteness errors may be seen as a wrong feature rather than as wrong constituent and it is by now an accepted standard to report accuracy with and without such errors. 
for the grammars’ performance on the parsing task.', 'The table makes clear that enriching our grammar improves the syntactic performance as well as morphological disambiguation (segmentation and POS tagging) accuracy.', 'This supports our main thesis that decisions taken by single, improved, grammar are beneficial for both tasks.', 'When using the segmentation pruning (using HSPELL) for unseen tokens, performance improves for all tasks as well.', 'Yet we note that the better grammars without pruning outperform the poorer grammars using this technique, indicating that the syntactic context aids, to some extent, the disambiguation of unknown tokens.', 'Table 2 compares the performance of our system on the setup of Cohen and Smith (2007) to the best results reported by them for the same tasks.', 'We first note that the accuracy results of our system are overall higher on their setup, on all measures, indicating that theirs may be an easier dataset.', 'Secondly, for all our models we provide better fine- and coarse-grained POS-tagging accuracy, and all pruned models outperform the Oracle results reported by them.12 In terms of syntactic disambiguation, even the simplest grammar pruned with HSPELL outperforms their non-Oracle results.', 'Without HSPELL-pruning, our simpler grammars are somewhat lagging behind, but as the grammars improve the gap is bridged.', 'The addition of vertical markovization enables non-pruned models to outperform all previously reported re12Cohen and Smith (2007) make use of a parameter (α) which is tuned separately for each of the tasks.', 'This essentially means that their model does not result in a true joint inference, as executions for different tasks involve tuning a parameter separately.', 'In our model there are no such hyper-parameters, and the performance is the result of truly joint disambiguation. 
sults.', 'Furthermore, the combination of pruning and vertical markovization of the grammar outperforms the Oracle results reported by Cohen and Smith.', 'This essentially means that a better grammar tunes the joint model for optimized syntactic disambiguation at least in as much as their hyper parameters do.', 'An interesting observation is that while vertical markovization benefits all our models, its effect is less evident in Cohen and Smith.', 'On the surface, our model may seem as a special case of Cohen and Smith in which α = 0.', 'However, there is a crucial difference: the morphological probabilities in their model come from discriminative models based on linear context.', 'Many morphological decisions are based on long distance dependencies, and when the global syntactic evidence disagrees with evidence based on local linear context, the two models compete with one another, despite the fact that the PCFG takes also local context into account.', 'In addition, as the CRF and PCFG look at similar sorts of information from within two inherently different models, they are far from independent and optimizing their product is meaningless.', 'Cohen and Smith approach this by introducing the α hyperparameter, which performs best when optimized independently for each sentence (cf.', 'Oracle results).', 'In contrast, our morphological probabilities are based on a unigram, lexeme-based model, and all other (local and non-local) contextual considerations are delegated to the PCFG.', 'This fully generative model caters for real interaction between the syntactic and morphological levels as a part of a single coherent process.', 'Employing a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions is not only theoretically clean and linguistically justified and but also probabilistically apropriate and empirically sound.', 'The overall performance of our joint framework demonstrates that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperforms upper bounds proposed by previous joint disambiguation systems and achieves segmentation and parsing results on a par with state-of-the-art standalone applications results.', 'Better grammars are shown here to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.', 'We conjecture that this trend may continue by incorporating additional information, e.g., three-dimensional models as proposed by Tsarfaty and Sima’an (2007).', 'In the current work morphological analyses and lexical probabilities are derived from a small Treebank, which is by no means the best way to go.', 'Using a wide-coverage morphological analyzer based on (Itai et al., 2006) should cater for a better coverage, and incorporating lexical probabilities learned from a big (unannotated) corpus (cf.', '(Levinger et al., 1995; Goldberg et al., ; Adler et al., 2008)) will make the parser more robust and suitable for use in more realistic scenarios.', 'Acknowledgments We thank Meni Adler and Michael Elhadad (BGU) for helpful comments and discussion.', 'We further thank Khalil Simaan (ILLCUvA) for his careful advise concerning the formal details of the proposal.', 'The work of the first author was supported by the Lynn and William Frankel Center for Computer Sciences.', 'The work of the second author as well as collaboration visits to Israel was financed by NWO, grant number 
017.001.271.']",abstractive -W99-0623_vardha,W99-0623,2,2,Two general approaches are presented and two combination techniques are described for each approach.,Two general approaches are presented and two combination techniques are described for each approach.,"['Exploiting Diversity in Natural Language Processing: Combining Parsers', 'Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy.', 'Two general approaches are presented and two combination techniques are described for each approach.', 'Both parametric and non-parametric models are explored.', 'The resulting parsers surpass the best previously published performance results for the Penn Treebank.', 'The natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problems.', 'The machine learning community has been in a similar situation and has studied the combination of multiple classifiers (Wolpert, 1992; Heath et al., 1996).', 'Their theoretical finding is simply stated: classification error rate decreases toward the noise rate exponentially in the number of independent, accurate classifiers.', 'The theory has also been validated empirically.', 'Recently, combination techniques have been investigated for part of speech tagging with positive results (van Halteren et al., 1998; Brill and Wu, 1998).', 'In both cases the investigators were able to achieve significant improvements over the previous best tagging results.', 'Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998).', 'The corpus-based statistical parsing community has many fast and accurate automated parsing systems, including systems produced by Collins (1997), Charniak (1997) and Ratnaparkhi (1997).', 'These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993).', 'We used these three parsers to explore parser combination techniques.', 'We are interested in combining the substructures of the input parses to produce a better parse.', 'We call this approach parse hybridization.', 'The substructures that are unanimously hypothesized by the parsers should be preserved after combination, and the combination technique should not foolishly create substructures for which there is no supporting evidence.', 'These two principles guide experimentation in this framework, and together with the evaluation measures help us decide which specific type of substructure to combine.', 'The precision and recall measures (described in more detail in Section 3) used in evaluating Treebank parsing treat each constituent as a separate entity, a minimal unit of correctness.', 'Since our goal is to perform well under these measures we will similarly treat constituents as the minimal substructures for combination.', ""One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set."", 'If enough parsers suggest that a particular constituent belongs in the parse, we include it.', 'We call this technique constituent voting.', 'We include a constituent in our hypothesized parse if it appears in the output of a majority of the parsers.', 'In our particular case the majority requires the agreement of only two parsers because we have only three.', 'This technique has the advantage of requiring no 
training, but it has the disadvantage of treating all parsers equally even though they may have differing accuracies or may specialize in modeling different phenomena.', 'Another technique for parse hybridization is to use a naïve Bayes classifier to determine which constituents to include in the parse.', 'The development of a naïve Bayes classifier involves learning how much each parser should be trusted for the decisions it makes.', 'Our original hope in combining these parsers is that their errors are independently distributed.', 'This is equivalent to the assumption used in probability estimation for naïve Bayes classifiers, namely that the attribute values are conditionally independent when the target value is given.', 'For this reason, naïve Bayes classifiers are well-matched to this problem.', 'In Equations 1 through 3 we develop the model for constructing our parse using naïve Bayes classification.', 'C is the union of the sets of constituents suggested by the parsers. r(c) is a binary function returning t (for true) precisely when the constituent c E C should be included in the hypothesis.', 'Mi(c) is a binary function returning t when parser i (from among the k parsers) suggests constituent c should be in the parse.', 'The hypothesized parse is then the set of constituents that are likely (P > 0.5) to be in the parse according to this model.', 'The estimation of the probabilities in the model is carried out as shown in Equation 4.', 'Here NO counts the number of hypothesized constituents in the development set that match the binary predicate specified as an argument.', 'Under certain conditions the constituent voting and naïve Bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing brackets.', 'There are simply not enough votes remaining to allow any of the crossing structures to enter the hypothesized constituent set.', 'Lemma: If the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituents.', 'IL+-1Proof: Assume a pair of crossing constituents appears in the output of the constituent voting technique using k parsers.', 'Call the crossing constituents A and B.', 'A receives a votes, and B receives b votes.', 'Each of the constituents must have received at least 1 votes from the k parsers, so a > I1 and 2 — 2k±-1 b > ri-5-111.', 'Let s = a + b.', 'None of the parsers produce parses with crossing brackets, so none of them votes for both of the assumed constituents.', 'Hence, s < k. 
But by addition of the votes on the two parses, s > 2N-11> k, a contradiction.', '• Similarly, when the naïve Bayes classifier is configured such that the constituents require estimated probabilities strictly larger than 0.5 to be accepted, there is not enough probability mass remaining on crossing brackets for them to be included in the hypothesis.', 'In general, the lemma of the previous section does not ensure that all the productions in the combined parse are found in the grammars of the member parsers.', 'There is a guarantee of no crossing brackets but there is no guarantee that a constituent in the tree has the same children as it had in any of the three original parses.', 'One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.', 'This drastic tree manipulation is not appropriate for situations in which we want to assign particular structures to sentences.', 'For example, we may have semantic information (e.g. database query operations) associated with the productions in a grammar.', 'If the parse contains productions from outside our grammar the machine has no direct method for handling them (e.g. the resulting database query may be syntactically malformed).', 'We have developed a general approach for combining parsers when preserving the entire structure of a parse tree is important.', 'The combining algorithm is presented with the candidate parses and asked to choose which one is best.', 'The combining technique must act as a multi-position switch indicating which parser should be trusted for the particular sentence.', 'We call this approach parser switching.', 'Once again we present both a non-parametric and a parametric technique for this task.', 'First we present the non-parametric version of parser switching, similarity switching: The intuition for this technique is that we can measure a similarity between parses by counting the constituents they have in common.', 'We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities.', 'This is the parse that is closest to the centroid of the observed parses under the similarity metric.', 'The probabilistic version of this procedure is straightforward: We once again assume independence among our various member parsers.', 'Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1.', 'We model each parse as the decisions made to create it, and model those decisions as independent events.', 'Each decision determines the inclusion or exclusion of a candidate constituent.', 'The set of candidate constituents comes from the union of all the constituents suggested by the member parsers.', 'This is summarized in Equation 5.', 'The computation of Pfr1(c)1Mi M k (C)) has been sketched before in Equations 1 through 4.', ""In this case we are interested in finding' the maximum probability parse, ri, and Mi is the set of relevant (binary) parsing decisions made by parser i. 
ri is a parse selected from among the outputs of the individual parsers."", 'It is chosen such that the decisions it made in including or excluding constituents are most probable under the models for all of the parsers.', 'The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.', 'We used section 23 as the development set for our combining techniques, and section 22 only for final testing.', 'The development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentences.', ""A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers.'"", 'The standard measures for evaluating Penn Treebank parsing performance are precision and recall of the predicted constituents.', 'Each parse is converted into a set of constituents represented as a tuples: (label, start, end).', 'The set is then compared with the set generated from the Penn Treebank parse to determine the precision and recall.', 'Precision is the portion of hypothesized constituents that are correct and recall is the portion of the Treebank constituents that are hypothesized.', 'For our experiments we also report the mean of precision and recall, which we denote by (P + R)I2 and F-measure.', 'F-measure is the harmonic mean of precision and recall, 2PR/(P + R).', 'It is closer to the smaller value of precision and recall when there is a large skew in their values.', 'We performed three experiments to evaluate our techniques.', 'The first shows how constituent features and context do not help in deciding which parser to trust.', 'We then show that the combining techniques presented above give better parsing accuracy than any of the individual parsers.', 'Finally we show the combining techniques degrade very little when a poor parser is added to the set.', 'It is possible one could produce better models by introducing features describing constituents and their contexts because one parser could be much better than the majority of the others in particular situations.', 'For example, one parser could be more accurate at predicting noun phrases than the other parsers.', 'None of the models we have presented utilize features associated with a particular constituent (i.e. the label, span, parent label, etc.) 
to influence parser preference.', 'This is not an oversight.', 'Features and context were initially introduced into the models, but they refused to offer any gains in performance.', 'While we cannot prove there are no such useful features on which one should condition trust, we can give some insight into why the features we explored offered no gain.', 'Because we are working with only three parsers, the only situation in which context will help us is when it can indicate we should choose to believe a single parser that disagrees with the majority hypothesis instead of the majority hypothesis itself.', 'This is the only important case, because otherwise the simple majority combining technique would pick the correct constituent.', 'One side of the decision making process is when we choose to believe a constituent should be in the parse, even though only one parser suggests it.', 'We call such a constituent an isolated constituent.', 'If we were working with more than three parsers we could investigate minority constituents, those constituents that are suggested by at least one parser, but which the majority of the parsers do not suggest.', 'Adding the isolated constituents to our hypothesis parse could increase our expected recall, but in the cases we investigated it would invariably hurt our precision more than we would gain on recall.', 'Consider for a set of constituents the isolated constituent precision parser metric, the portion of isolated constituents that are correctly hypothesized.', ""When this metric is less than 0.5, we expect to incur more errors' than we will remove by adding those constituents to the parse."", 'We show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemes.', 'In Table 1 we see with very few exceptions that the isolated constituent precision is less than 0.5 when we use the constituent label as a feature.', 'The counts represent portions of the approximately 44000 constituents hypothesized by the parsers in the development set.', 'In the cases where isolated constituent precision is larger than 0.5 the affected portion of the hypotheses is negligible.', 'Similarly Figures 1 and 2 show how the isolated constituent precision varies by sentence length and the size of the span of the hypothesized constituent.', 'In each figure the upper graph shows the isolated constituent precision and the bottom graph shows the corresponding number of hypothesized constituents.', 'Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.', 'From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power.', 'The results in Table 2 were achieved on the development set.', 'The first two rows of the table are baselines.', 'The first row represents the average accuracy of the three parsers we combine.', ""The second row is the accuracy of the best of the three parsers.'"", 'The next two rows are results of oracle experiments.', 'The parser switching oracle is the upper bound on the accuracy that can be achieved on this set in the parser switching framework.', 'It is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentences.', 'The maximum precision row is the upper bound on accuracy if we could pick exactly the correct constituents from among the constituents suggested by the three parsers.', 
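Both oracle rows can be phrased as set operations over (label, start, end) constituent tuples, using the precision and recall definitions given earlier. The sketch below shows one way such oracles could be computed per sentence; the helper names (prf, switching_oracle, max_precision_oracle) and the toy constituent sets are ours, not the experimental code.

```python
def prf(hyp, gold):
    """Precision, recall and F-measure for sets of (label, start, end) tuples."""
    correct = len(hyp & gold)
    p = correct / len(hyp) if hyp else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def switching_oracle(parses, gold):
    """Pick, per sentence, the member parse that scores best against the gold tree."""
    return max(parses, key=lambda c: prf(c, gold)[2])

def max_precision_oracle(parses, gold):
    """Keep exactly the correct constituents from the union of all parses."""
    union = set().union(*parses)
    return union & gold

# Toy example: three parsers' constituent sets for one sentence.
gold = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)}
parses = [{("S", 0, 5), ("NP", 0, 2)},
          {("S", 0, 5), ("NP", 0, 3), ("VP", 2, 5)},
          {("S", 0, 5), ("VP", 3, 5)}]
print(prf(switching_oracle(parses, gold), gold))
print(prf(max_precision_oracle(parses, gold), gold))
```

The switching oracle selects whichever member parse scores best for the sentence, while the maximum-precision oracle keeps exactly the correct constituents from the union of all hypotheses.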
'Another way to interpret this is that less than 5% of the correct constituents are missing from the hypotheses generated by the union of the three parsers.', 'The maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridization.', 'We do not show the numbers for the Bayes models in Table 2 because the parameters involved were established using this set.', 'The precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser, and constituent voting is significantly better than parser switching in precision.4 Constituent voting gives the highest accuracy for parsing the Penn Treebank reported to date.', 'Table 3 contains the results for evaluating our systems on the test set (section 22).', 'All of these systems were run on data that was not seen during their development.', 'The difference in precision between similarity and Bayes switching techniques is significant, but the difference in recall is not.', 'This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.', 'The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers.', 'Table 4 shows how much the Bayes switching technique uses each of the parsers on the test set.', 'Parser 3, the most accurate parser, was chosen 71% of the time, and Parser 1, the least accurate parser was chosen 16% of the time.', 'Ties are rare in Bayes switching because the models are fine-grained — many estimated probabilities are involved in each decision.', 'In the interest of testing the robustness of these combining techniques, we added a fourth, simple nonlexicalized PCFG parser.', 'The PCFG was trained from the same sections of the Penn Treebank as the other three parsers.', 'It was then tested on section 22 of the Treebank in conjunction with the other parsers.', 'The results of this experiment can be seen in Table 5.', 'The entries in this table can be compared with those of Table 3 to see how the performance of the combining techniques degrades in the presence of an inferior parser.', 'As seen by the drop in average individual parser performance baseline, the introduced parser does not perform very well.', 'The average individual parser accuracy was reduced by more than 5% when we added this new parser, but the precision of the constituent voting technique was the only result that decreased significantly.', 'The Bayes models were able to achieve significantly higher precision than their non-parametric counterparts.', 'We see from these results that the behavior of the parametric techniques are robust in the presence of a poor parser.', 'Surprisingly, the non-parametric switching technique also exhibited robust behaviour in this situation.', 'We have presented two general approaches to studying parser combination: parser switching and parse hybridization.', 'For each experiment we gave an nonparametric and a parametric technique for combining parsers.', 'All four of the techniques studied result in parsing systems that perform better than any previously reported.', 'Both of the switching techniques, as well as the parametric hybridization technique were also shown to be robust when a poor parser was introduced into the experiments.', 'Through parser combination we have reduced the precision error rate by 30% and the recall error 
rate by 6% compared to the best previously published result.', 'Combining multiple highly-accurate independent parsers yields promising results.', 'We plan to explore more powerful techniques for exploiting the diversity of parsing methods.', 'We would like to thank Eugene Charniak, Michael Collins, and Adwait Ratnaparkhi for enabling all of this research by providing us with their parsers and helpful comments.', 'This work was funded by NSF grant IRI-9502312.', 'Both authors are members of the Center for Language and Speech Processing at Johns Hopkins University.']",extractive -J96-3004,J96-3004,5,70,The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.,"This latter evaluation compares the performance of the system with that of several human judges since, as we shall show, even people do not agree on a single correct way to segment a text.","['A Stochastic Finite-State Word-Segmentation Algorithm for Chinese', 'The initial stage of text analysis for any NLP task usually involves the tokenization of the input into words.', ' For languages like English one can assume, to a first approximation, that word boundaries are given by whitespace or punctuation.', ' In various Asian languages, including Chinese, on the other hand, whitespace is never used to delimit words, so one must resort to lexical information to ""reconstruct"" the word-boundary information.', ' In this paper we present a stochastic finite-state model wherein the basic workhorse is the weighted finite-state transducer.', ' The model segments Chinese text into dictionary entries and words derived by various productive lexical processes, and--since the primary intended application of this model is to text-to-speech synthesis--provides pronunciations for these words.', ' We evaluate the system\'s performance by comparing its segmentation \'Tudgments"" with the judgments of a pool of human segmenters, and the system is shown to perform quite well.', 'Any NLP application that presumes as input unrestricted text requires an initial phase of text analysis; such applications involve problems as diverse as machine translation, information retrieval, and text-to-speech synthesis (TIS).', 'An initial step of any textÂ\xad analysis task is the tokenization of the input into words.', 'For a language like English, this problem is generally regarded as trivial since words are delimited in English text by whitespace or marks of punctuation.', ""Thus in an English sentence such as I'm going to show up at the ACL one would reasonably conjecture that there are eight words separated by seven spaces."", ""A moment's reflection will reveal that things are not quite that simple."", ""There are clearly eight orthographic words in the example given, but if one were doing syntactic analysis one would probably want to consider I'm to consist of two syntactic words, namely I and am."", 'If one is interested in translation, one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and up.', ""And if one is interested in TIS, one would probably consider the single orthographic word ACL to consist of three phonological words-lei s'i d/-corresponding to the pronunciation of each of the letters in the acronym."", 'Space- or punctuation-delimited * 700 Mountain Avenue, 2d451, Murray Hill, NJ 07974, USA.', 'Email: rlls@bell-labs.', 'com t 700 Mountain Avenue, 
2d451, Murray Hill, NJ 07974, USA.', 'Email: cls@bell-labs.', 'com t 600 Mountain Avenue, 2c278, Murray Hill, NJ 07974, USA.', 'Email: gale@research.', 'att.', ""com §Cambridge, UK Email: nc201@eng.cam.ac.uk © 1996 Association for Computational Linguistics (a) B ) ( , : & ; ? ' H o w d o y o u s a y o c t o p u s i n J a p a n e s e ? ' (b) P l a u s i b l e S e g m e n t a t i o n I B X I I 1 : & I 0 0 r i 4 w e n 2 z h a n g l y u 2 z e n 3 m e 0 s h u o l ' J a p a n e s e ' ' o c t o p u s ' ' h o w ' ' s a y ' (c) Figure 1 I m p l a u s i b l e S e g m e n t a t i o n [§] lxI 1:&I ri4 wen2 zhangl yu2zen3 me0 shuol 'Japan' 'essay' 'fish' 'how' 'say' A Chinese sentence in (a) illustrating the lack of word boundaries."", 'In (b) is a plausible segmentation for this sentence; in (c) is an implausible segmentation.', 'orthographic words are thus only a starting point for further analysis and can only be regarded as a useful hint at the desired division of the sentence into words.', 'Whether a language even has orthographic words is largely dependent on the writing system used to represent the language (rather than the language itself); the notion ""orthographic word"" is not universal.', 'Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writÂ\xad ing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words.1 Put another way, written Chinese simply lacks orthographic words.', 'In Chinese text, individual characters of the script, to which we shall refer by their traditional name of hanzi,Z are written one after another with no intervening spaces; a Chinese sentence is shown in Figure 1.3 Partly as a result of this, the notion ""word"" has never played a role in Chinese philological tradition, and the idea that Chinese lacks anyÂ\xad thing analogous to words in European languages has been prevalent among Western sinologists; see DeFrancis (1984).', 'Twentieth-century linguistic work on Chinese (Chao 1968; Li and Thompson 1981; Tang 1988,1989, inter alia) has revealed the incorrectness of this traditional view.', 'All notions of word, with the exception of the orthographic word, are as relevant in Chinese as they are in English, and just as is the case in other languages, a word in Chinese may correspond to one or more symbols in the orthog 1 For a related approach to the problem of word-segrnention in Japanese, see Nagata (1994), inter alia..', ""2 Chinese ?l* han4zi4 'Chinese character'; this is the same word as Japanese kanji.."", '3 Throughout this paper we shall give Chinese examples in traditional orthography, followed.', 'immediately by a Romanization into the pinyin transliteration scheme; numerals following each pinyin syllable represent tones.', 'Examples will usually be accompanied by a translation, plus a morpheme-by-morpheme gloss given in parentheses whenever the translation does not adequately serve this purpose.', 'In the pinyin transliterations a dash(-) separates syllables that may be considered part of the same phonological word; spaces are used to separate plausible phonological words; and a plus sign (+) is used, where relevant, to indicate morpheme boundaries of interest.', ""raphy: A ren2 'person' is a fairly uncontroversial case of a monographemic word, and rplil zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a 
diÂ\xad graphernic word."", ""The relevance of the distinction between, say, phonological words and, say, dictionary words is shown by an example like rpftl_A :;!:Hfllil zhong1hua2 ren2min2 gong4he2-guo2 (China people republic) 'People's Republic of China.'"", 'Arguably this consists of about three phonological words.', 'On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English.', 'Thus, if one wants to segment words-for any purpose-from Chinese sentences, one faces a more difficult task than one does in English since one cannot use spacing as a guide.', 'For example, suppose one is building a ITS system for Mandarin Chinese.', 'For that application, at a minimum, one would want to know the phonological word boundaries.', 'Now, for this application one might be tempted to simply bypass the segmentation problem and pronounce the text character-by-character.', 'However, there are several reasons why this approach will not in general work: 1.', 'Many hanzi have more than one pronunciation, where the correct.', ""pronunciation depends upon word affiliation: tfJ is pronounced deO when it is a prenominal modification marker, but di4 in the word §tfJ mu4di4 'goal'; fl; is normally ganl 'dry,' but qian2 in a person's given name."", 'including Third Tone Sandhi (Shih 1986), which changes a 3 (low) tone into a 2 (rising) tone before another 3 tone: \'j"";gil, xiao3 [lao3 shu3] \'little rat,\' becomes xiao3 { lao2shu3 ], rather than xiao2 { lao2shu3 ], because the rule first applies within the word lao3shu3 \'rat,\' blocking its phrasal application.', '3.', 'In various dialects of Mandarin certain phonetic rules apply at the word.', 'level.', ""For example, in Northern dialects (such as Beijing), a full tone (1, 2, 3, or 4) is changed to a neutral tone (0) in the final syllable of many words: Jll donglgual 'winter melon' is often pronounced donglguaO."", 'The high 1 tone of J1l would not normally neutralize in this fashion if it were functioning as a word on its own.', '4.', 'TIS systems in general need to do more than simply compute the.', 'pronunciations of individual words; they also need to compute intonational phrase boundaries in long utterances and assign relative prominence to words in those utterances.', 'It has been shown for English (Wang and Hirschberg 1992; Hirschberg 1993; Sproat 1994, inter alia) that grammatical part of speech provides useful information for these tasks.', 'Given that part-of-speech labels are properties of words rather than morphemes, it follows that one cannot do part-of-speech assignment without having access to word-boundary information.', 'Making the reasonable assumption that similar information is relevant for solving these problems in Chinese, it follows that a prerequisite for intonation-boundary assignment and prominence assignment is word segmentation.', ""The points enumerated above are particularly related to ITS, but analogous arguments can easily be given for other applications; see for example Wu and Tseng's (1993) discussion of the role of segmentation in information retrieval."", 'There are thus some very good reasons why segmentation into words is an important task.', 'A minimal requirement for building a Chinese word segmenter is obviously a dictionary; furthermore, as has been argued persuasively by Fung and Wu (1994), one will perform much better at segmenting text by using a dictionary constructed with text of the 
same genre as the text to be segmented.', 'For novel texts, no lexicon that consists simply of a list of word entries will ever be entirely satisfactory, since the list will inevitably omit many constructions that should be considered words.', 'Among these are words derived by various productive processes, including: 1.', 'Morphologically derived words such as, xue2shengl+men0.', ""(student+plural) 'students,' which is derived by the affixation of the plural affix f, menD to the nounxue2shengl."", '2.', ""Personal names such as 00, 3R; zhoulenl-lai2 'Zhou Enlai.'"", 'Of course, we.', ""can expect famous names like Zhou Enlai's to be in many dictionaries, but names such as :fi lf;f; shi2jil-lin2, the name of the second author of this paper, will not be found in any dictionary."", ""'Malaysia.'"", ""Again, famous place names will most likely be found in the dictionary, but less well-known names, such as 1PM± R; bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found."", 'In this paper we present a stochastic finite-state model for segmenting Chinese text into words, both words found in a (static) lexicon as well as words derived via the above-mentioned productive processes.', ""The segmenter handles the grouping of hanzi into words and outputs word pronunciations, with default pronunciations for hanzi it cannot group; we focus here primarily on the system's ability to segment text appropriately (rather than on its pronunciation abilities)."", 'The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.', 'It also incorporates the Good-Turing method (Baayen 1989; Church and Gale 1991) in estimating the likelihoods of previously unseen conÂ\xad structions, including morphological derivatives and personal names.', 'We will evaluate various specific aspects of the segmentation, as well as the overall segmentation perÂ\xad formance.', 'This latter evaluation compares the performance of the system with that of several human judges since, as we shall show, even people do not agree on a single correct way to segment a text.', 'Finally, this effort is part of a much larger program that we are undertaking to develop stochastic finite-state methods for text analysis with applications to TIS and other areas; in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader context.', '2.', 'A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.', 'The first point we need to address is what type of linguistic object a hanzi repreÂ\xad sents.', 'Much confusion has been sown about Chinese writing by the use of the term ideograph, suggesting that hanzi somehow directly represent ideas.', 'The most accurate characterization of Chinese writing is that it is morphosyllabic (DeFrancis 1984): each hanzi represents one morpheme lexically and semantically, and one syllable phonologiÂ\xad cally.', ""Thus in a two-hanzi word like lflli?J zhong1guo2 (middle country) 'China' there are two syllables, and at the same time two morphemes."", ""Of course, since the number of attested (phonemic) Mandarin syllables (roughly 1400, including tonal 
distinctions) is far smaller than the number of morphemes, it follows that a given syllable could in principle be written with any of several different hanzi, depending upon which morpheme is intended: the syllable zhongl could be lfl 'middle,''clock,''end,' or ,'loyal.'"", 'A morpheme, on the other hand, usually corresponds to a unique hanzi, though there are a few cases where variant forms are found.', 'Finally, quite a few hanzi are homographs, meaning that they may be pronounced in several different ways, and in extreme cases apparently represent different morphemes: The prenominal modifiÂ\xad cation marker eg deO is presumably a different morpheme from the second morpheme of §eg mu4di4, even though they are written the same way.4 The second point, which will be relevant in the discussion of personal names in Section 4.4, relates to the internal structure of hanzi.', ""Following the system devised under the Qing emperor Kang Xi, hanzi have traditionally been classified according to a set of approximately 200 semantic radicals; members of a radical class share a particular structural component, and often also share a common meaning (hence the term 'semantic')."", ""For example, hanzi containing the INSECT radical !R tend to denote insects and other crawling animals; examples include tr wal 'frog,' feng1 'wasp,' and !Itt she2 'snake.'"", ""Similarly, hanzi sharing the GHOST radical _m tend to denote spirits and demons, such as _m gui3 'ghost' itself, II: mo2 'demon,' and yan3 'nightmare.'"", 'While the semantic aspect of radicals is by no means completely predictive, the semantic homogeneity of many classes is quite striking: for example 254 out of the 263 examples (97%) of the INSECT class listed by Wieger (1965, 77376) denote crawling or invertebrate animals; similarly 21 out of the 22 examples (95%) of the GHOST class (page 808) denote ghosts or spirits.', 'As we shall argue, the semantic class affiliation of a hanzi constitutes useful information in predicting its properties.', '3.', 'Previous Work.', 'There is a sizable literature on Chinese word segmentation: recent reviews include Wang, Su, and Mo (1990) and Wu and Tseng (1993).', 'Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexiÂ\xad cal rule-based approaches, and approaches that combine lexical information with staÂ\xad tistical information.', 'The present proposal falls into the last group.', 'Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach.', 'In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words.', 'Mutual information was shown to be useful in the segmentation task given that one does not have a dictionary.', 'A related point is that mutual information is helpful in augmenting existing electronic dictionaries, (cf.', '4 To be sure, it is not always true that a hanzi represents a syllable or that it represents a morpheme.', 'For.', ""example, in Northern Mandarin dialects there is a morpheme -r that attaches mostly to nouns, and which is phonologically incorporated into the syllable to which it attaches: thus men2+r (door+R) 'door' is realized as mer2."", 'This is orthographically represented as 7C.', ""so that 'door' would be and in this case the hanzi 7C, does not represent a syllable."", ""Similarly, there is no compelling evidence that either of the syllables of f.ifflll 
binllang2 'betelnut' represents a morpheme, since neither can occur in any context without the other: more likely fjfflll binllang2 is a disyllabic morpheme."", '(See Sproat and Shih 1995.)', 'However, the characterization given in the main body of the text is correct sufficiently often to be useful.', 'Church and Hanks [1989]), and we have used lists of character pairs ranked by mutual information to expand our own dictionary.', 'Nonstochastic lexical-knowledge-based approaches have been much more numerÂ\xad ous.', 'Two issues distinguish the various proposals.', 'The first concerns how to deal with ambiguities in segmentation.', 'The second concerns the methods used (if any) to exÂ\xad tend the lexicon beyond the static list of entries provided by the machine-readable dictionary upon which it is based.', 'The most popular approach to dealing with segÂ\xad mentation ambiguities is the maximum matching method, possibly augmented with further heuristics.', 'This method, one instance of which we term the ""greedy algorithm"" in our evaluation of our own system in Section 5, involves starting at the beginning (or end) of the sentence, finding the longest word starting (ending) at that point, and then repeating the process starting at the next (previous) hanzi until the end (beginÂ\xad ning) of the sentence is reached.', 'Papers that use this method or minor variants thereof include Liang (1986), Li et al.', '(1991}, Gu and Mao (1994), and Nie, Jin, and Hannan (1994).', 'The simplest version of the maximum matching algorithm effectively deals with ambiguity by ignoring it, since the method is guaranteed to produce only one segmentation.', 'Methods that allow multiple segmentations must provide criteria for choosing the best segmentation.', 'Some approaches depend upon some form of conÂ\xad straint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach).', 'Others depend upon various lexical heurisÂ\xad tics: for example Chen and Liu (1992) attempt to balance the length of words in a three-word window, favoring segmentations that give approximately equal length for each word.', 'Methods for expanding the dictionary include, of course, morphological rules, rules for segmenting personal names, as well as numeral sequences, expressions for dates, and so forth (Chen and Liu 1992; Wang, Li, and Chang 1992; Chang and Chen 1993; Nie, Jin, and Hannan 1994).', 'Lexical-knowledge-based approaches that include statistical information generally presume that one starts with all possible segmentations of a sentence, and picks the best segmentation from the set of possible segmentations using a probabilistic or costÂ\xad based scoring mechanism.', 'Approaches differ in the algorithms used for scoring and selecting the best path, as well as in the amount of contextual information used in the scoring process.', 'The simplest approach involves scoring the various analyses by costs based on word frequency, and picking the lowest cost path; variants of this approach have been described in Chang, Chen, and Chen (1991) and Chang and Chen (1993).', 'More complex approaches such as the relaxation technique have been applied to this problem Fan and Tsai (1988}.', 'Note that Chang, Chen, and Chen (1991), in addition to word-frequency information, include a constraint-satisfication model, so their method is really a hybrid approach.', 'Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 
1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis.', 'Statistical methods seem particularly applicable to the problem of unknown-word identification, especially for constructions like names, where the linguistic constraints are minimal, and where one therefore wants to know not only that a particular seÂ\xad quence of hanzi might be a name, but that it is likely to be a name with some probabilÂ\xad ity.', 'Several systems propose statistical methods for handling unknown words (Chang et al. 1992; Lin, Chiang, and Su 1993; Peng and Chang 1993).', 'Some of these approaches (e.g., Lin, Chiang, and Su [1993]) attempt to identify unknown words, but do not acÂ\xad tually tag the words as belonging to one or another class of expression.', 'This is not ideal for some applications, however.', 'For instance, for TTS it is necessary to know that a particular sequence of hanzi is of a particular category because that knowlÂ\xad edge could affect the pronunciation; consider, for example the issues surrounding the pronunciation of ganl I qian2 discussed in Section 1.', 'Following Sproat and Shih (1990), performance for Chinese segmentation systems is generally reported in terms of the dual measures of precision and recalP It is fairly standard to report precision and recall scores in the mid to high 90% range.', 'However, it is almost universally the case that no clear definition of what constitutes a ""correct"" segmentation is given, so these performance measures are hard to evaluate.', 'Indeed, as we shall show in Section 5, even human judges differ when presented with the task of segmenting a text into words, so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measures.', 'In a few cases, the criteria for correctness are made more explicit.', 'For example Chen and Liu (1992) report precision and recall rates of over 99%, but this counts only the words that occur in the test corpus that also occur in their dictionary.', 'Besides the lack of a clear definition of what constitutes a correct segmentation for a given Chinese sentence, there is the more general issue that the test corpora used in these evaluations differ from system to system, so meaningful comparison between systems is rendered even more difficult.', 'The major problem for all segmentation systems remains the coverage afforded by the dictionary and the lexical rules used to augment the dictionary to deal with unseen words.', 'The dictionary sizes reported in the literature range from 17,000 to 125,000 entries, and it seems reasonable to assume that the coverage of the base dictionary constitutes a major factor in the performance of the various approaches, possibly more important than the particular set of methods used in the segmentation.', 'Furthermore, even the size of the dictionary per se is less important than the appropriateness of the lexicon to a particular test corpus: as Fung and Wu (1994) have shown, one can obtain substantially better segmentation by tailoring the lexicon to the corpus to be segmented.', 'Chinese word segmentation can be viewed as a stochastic transduction problem.', 'More formally, we start by representing the dictionary D as a Weighted Finite State TransÂ\xad ducer (WFST) (Pereira, Riley, and Sproat 1994).', 'Let H be the set of hanzi, p be the set of pinyin syllables with tone 
marks, and P be the set of grammatical part-of-speech labels.', 'Then each arc of D maps either from an element of H to an element of p, or from E-i.e., the empty string-to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element 5 of Hxp, which is terminated with a weighted arc labeled with an element of Ex P. The weight represents the estimated cost (negative log probability) of the word.', 'Next, we represent the input sentence as an unweighted finite-state acceptor (FSA) I over H. Let us assume the existence of a function Id, which takes as input an FSA A, and produces as output a transducer that maps all and only the strings of symbols accepted by A to themselves (Kaplan and Kay 1994).', 'We can 5 Recall that precision is defined to be the number of correct hits divided by the total number of items.', 'selected; and that recall is defined to be the number of correct hits divided by the number of items that should have been selected.', 'then define the best segmentation to be the cheapest or best path in Id(I) o D* (i.e., Id(I) composed with the transitive closure of 0).6 Consider the abstract example illustrated in Figure 2.', 'In this example there are four ""input characters,"" A, B, C and D, and these map respectively to four ""pronunciations"" a, b, c and d. Furthermore, there are four ""words"" represented in the dictionary.', 'These are shown, with their associated costs, as follows: ABj nc 4.0 AB C/jj 6.0 CD /vb 5.', '0 D/ nc 5.0 The minimal dictionary encoding this information is represented by the WFST in Figure 2(a).', 'An input ABCD can be represented as an FSA as shown in Figure 2(b).', 'This FSA I can be segmented into words by composing Id(I) with D*, to form the WFST shown in Figure 2(c), then selecting the best path through this WFST to produce the WFST in Figure 2(d).', 'This WFST represents the segmentation of the text into the words AB and CD, word boundaries being marked by arcs mapping between f and part-of-speech labels.', 'Since the segmentation corresponds to the sequence of words that has the lowest summed unigram cost, the segmenter under discussion here is a zeroth-order model.', 'It is important to bear in mind, though, that this is not an inherent limitation of the model.', 'For example, it is well-known that one can build a finite-state bigram (word) model by simply assigning a state Si to each word Wi in the vocabulary, and having (word) arcs leaving that state weighted such that for each Wj and corresponding arc aj leaving Si, the cost on aj is the bigram cost of WiWj- (Costs for unseen bigrams in such a scheme would typically be modeled with a special backoff state.)', 'In Section 6 we disÂ\xad cuss other issues relating to how higher-order language models could be incorporated into the model.', '4.1 Dictionary Representation.', 'As we have seen, the lexicon of basic words and stems is represented as a WFST; most arcs in this WFST represent mappings between hanzi and pronunciations, and are costless.', 'Each word is terminated by an arc that represents the transduction between f and the part of speech of that word, weighted with an estimated cost for that word.', 'The cost is computed as follows, where N is the corpus size and f is the frequency: (1) Besides actual words from the base dictionary, the lexicon contains all hanzi in the Big 5 Chinese code/ with their pronunciation(s), plus entries for other characters that can be found in Chinese 
text, such as Roman letters, numerals, and special symbols.', 'Note that hanzi that are not grouped into dictionary words (and are not identified as singleÂ\xad hanzi words), or into one of the other categories of words discussed in this paper, are left unattached and tagged as unknown words.', 'Other strategies could readily 6 As a reviewer has pointed out, it should be made clear that the function for computing the best path is. an instance of the Viterbi algorithm.', '7 Big 5 is the most popular Chinese character coding standard in use in Taiwan and Hong Kong.', 'It is. based on the traditional character set rather than the simplified character set used in Singapore and Mainland China.', '(a) IDictionary D I D:d/0.000 B:b/0.000 B:b/0.000 ( b ) ( c ) ( d ) I B e s t P a t h ( I d ( I ) o D * ) I cps:nd4.!l(l() Figure 2 An abstract example illustrating the segmentation algorithm.', 'The transitive closure of the dictionary in (a) is composed with Id(input) (b) to form the WFST (c).', 'The segmentation chosen is the best path through the WFST, shown in (d).', '(In this figure eps is c) be implemented, though, such as a maximal-grouping strategy (as suggested by one reviewer of this paper); or a pairwise-grouping strategy, whereby long sequences of unattached hanzi are grouped into two-hanzi words (which may have some prosodic motivation).', 'We have not to date explored these various options.', 'Word frequencies are estimated by a re-estimation procedure that involves applyÂ\xad ing the segmentation algorithm presented here to a corpus of 20 million words,8 using 8 Our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of.', 'newspaper material, but also including kungfu fiction, Buddhist tracts, and scientific material.', 'This larger corpus was kindly provided to us by United Informatics Inc., R.O.C. 
a set of initial estimates of the word frequencies.9 In this re-estimation procedure only the entries in the base dictionary were used: in other words, derived words not in the base dictionary and personal and foreign names were not used.', 'The best analysis of the corpus is taken to be the true analysis, the frequencies are re-estimated, and the algorithm is repeated until it converges.', 'Clearly this is not the only way to estimate word-frequencies, however, and one could consider applying other methods: in particÂ\xad ular since the problem is similar to the problem of assigning part-of-speech tags to an untagged corpus given a lexicon and some initial estimate of the a priori probabilities for the tags, one might consider a more sophisticated approach such as that described in Kupiec (1992); one could also use methods that depend on a small hand-tagged seed corpus, as suggested by one reviewer.', 'In any event, to date, we have not compared different methods for deriving the set of initial frequency estimates.', 'Note also that the costs currently used in the system are actually string costs, rather than word costs.', ""This is because our corpus is not annotated, and hence does not distinguish between the various words represented by homographs, such as, which could be /adv jiangl 'be about to' orInc jiang4 '(military) general'-as in 1j\\xiao3jiang4 'little general.'"", 'In such cases we assign all of the estimated probability mass to the form with the most likely pronunciation (determined by inspection), and assign a very small probability (a very high cost, arbitrarily chosen to be 40) to all other variants.', 'In the case of, the most common usage is as an adverb with the pronunciation jiangl, so that variant is assigned the estimated cost of 5.98, and a high cost is assigned to nominal usage with the pronunciation jiang4.', 'The less favored reading may be selected in certain contexts, however; in the case of , for example, the nominal reading jiang4 will be selected if there is morphological information, such as a following plural affix ir, menD that renders the nominal reading likely, as we shall see in Section 4.3.', ""Figure 3 shows a small fragment of the WFST encoding the dictionary, containing both entries forjust discussed, g:tÂ¥ zhonglhua2 min2guo2 (China Republic) 'Republic of China,' and iÂ¥inl."", ""nan2gual 'pumpkin.'"", ""4.2 A Sample Segmentation Using Only Dictionary Words Figure 4 shows two possible paths from the lattice of possible analyses of the input sentence B X:Â¥ .:.S:P:l 'How do you say octopus in Japanese?' 
previously shown in Figure 1."", ""As noted, this sentence consists of four words, namely B X ri4wen2 'Japanese,' :Â¥, zhanglyu2 'octopus/ :&P:l zen3me0 'how,' and IDt shuol 'say.'"", ""As indicated in Figure 1(c), apart from this correct analysis, there is also the analysis taking B ri4 as a word (e.g., a common abbreviation for Japan), along with X:Â¥ wen2zhangl 'essay/ and f!!."", ""yu2 'fish.'"", 'Both of these analyses are shown in Figure 4; fortunately, the correct analysis is also the one with the lowest cost, so it is this analysis that is chosen.', '4.3 Morphological Analysis.', 'The method just described segments dictionary words, but as noted in Section 1, there are several classes of words that should be handled that are not found in a standard dictionary.', 'One class comprises words derived by productive morphologiÂ\xad cal processes, such as plural noun formation using the suffix ir, menD.', '(Other classes handled by the current system are discussed in Section 5.)', 'The morphological analÂ\xadysis itself can be handled using well-known techniques from finite-state morphol 9 The initial estimates are derived from the frequencies in the corpus of the strings of hanzi making up.', 'each word in the lexicon whether or not each string is actually an instance of the word in question.', '£ : _ADV: 5.88 If:!', "":zhong1 : 0.0 tjl :huo2 :0.0 (R:spub:/ic of Ch:ina) + .,_,...I : jlong4 :0.0 (mUifaty genG181) 0 £: _NC: 40.0 Figure 3 Partial Chinese Lexicon (NC = noun; NP = proper noun).c=- - I â\x80¢=- :il: .;ss:;zhangt â\x80¢ '-:."", 'I â\x80¢ JAPANS :rl4 .·········""\\)··········""o·\'·······""\\:J········· ·········\'\\; . \'.:: ..........0 6.51 9.51 : jj / JAPANESE OCTOPUS 10·28i£ :_nc HOW SAY f B :rl4 :il: :wen2 t \'- â\x80¢ :zhang!', '!!:\\ :yu2 e:_nc [::!!:zen3 l!f :moO t:_adv il!:shuot ,:_vb i i i 1 â\x80¢ 10.03 13...', '7.96 5.55 1 l...................................................................................................................................................................................................J..', ""Figure 4 Input lattice (top) and two segmentations (bottom) of the sentence 'How do you say octopus in Japanese?'."", 'A non-optimal analysis is shown with dotted lines in the bottom frame.', 'ogy (Koskenniemi 1983; Antworth 1990; Tzoukermann and Liberman 1990; Karttunen, Kaplan, and Zaenen 1992; Sproat 1992); we represent the fact that ir, attaches to nouns by allowing t:-transitions from the final states of all noun entries, to the initial state of the sub-WFST representing f,.', 'However, for our purposes it is not sufficient to repreÂ\xad sent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.', 'For derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entry.', ""So, 1: f, xue2shengl+men0 (student+PL) 'students' occurs and we estimate its cost at 11.43; similarly we estimate the cost of f, jiang4+men0 (general+PL) 'generals' (as in 'J' f, xiao3jiang4+men0 'little generals'), at 15.02."", ""But we also need an estimate of the probability for a non-occurring though possible plural form like iÂ¥JJ1l.f, nan2gua1-men0 'pumpkins.'"", '10 Here we use the Good-Turing estimate (Baayen 1989; Church and Gale 1991), whereby the aggregate probability of previously unseen instances of a construction is estimated as ni/N, where N is the total number of observed tokens and n1 is the number of types observed only once.', 'Let us notate 
the set of previously unseen, or novel, members of a category X as unseen(X); thus, novel members of the set of words derived in f, menO will be deÂ\xad noted unseen(f,).', 'For irt the Good-Turing estimate just discussed gives us an estimate of p(unseen(f,) I f,)-the probability of observing a previously unseen instance of a construction in ft given that we know that we have a construction in f,.', 'This GoodÂ\xad Turing estimate of p(unseen(f,) If,) can then be used in the normal way to define the probability of finding a novel instance of a construction in ir, in a text: p(unseen(f,)) = p(unseen(f,) I f,) p(fn Here p(ir,) is just the probability of any construction in ft as estimated from the frequency of such constructions in the corpus.', 'Finally, asÂ\xad suming a simple bigram backoff model, we can derive the probability estimate for the particular unseen word iÂ¥1J1l.', 'irL as the product of the probability estimate for iÂ¥JJ1l., and the probability estimate just derived for unseen plurals in ir,: p(iÂ¥1J1l.ir,) p(iÂ¥1J1l.)p(unseen(f,)).', 'The cost estimate, cost(iÂ¥JJ1l.fn is computed in the obvious way by summing the negative log probabilities of iÂ¥JJ1l.', 'and f,.', 'Figure 5 shows how this model is implemented as part of the dictionary WFST.', 'There is a (costless) transition between the NC node and f,.', 'The transition from f, to a final state transduces c to the grammatical tag PL with cost cost(unseen(f,)): cost(iÂ¥JJ1l.ir,) == cost(iÂ¥JJ1l.)', '+ cost(unseen(fm, as desired.', ""For the seen word ir, 'genÂ\xad erals,' there is an c:NC transduction from to the node preceding ir,; this arc has cost cost( f,) - cost(unseen(f,)), so that the cost of the whole path is the desired cost( f,)."", 'This representation gives ir, an appropriate morphological decomposition, preÂ\xad serving information that would be lost by simply listing ir, as an unanalyzed form.', 'Note that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its plural.', 'An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation-R2 = 0.20, p < 0.005; see Figure 6.', 'This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form.', '10 Chinese speakers may object to this form, since the suffix f, menD (PL) is usually restricted to.', 'attaching to terms denoting human beings.', ""However, it is possible to personify any noun, so in children's stories or fables, iÂ¥JJ1l."", ""f, nan2gual+men0 'pumpkins' is by no means impossible."", 'J:j:l :zhongl :0.0 ;m,Jlong4 :0.0 (mHHaryg9tltHBI) £: _ADV: 5.98 Â¥ :hua2:o.o E :_NC: 4.41 :mln2:o.o mm : guo2 : 0.0 (RopubllcofChlna) .....,.', '0 Figure 5 An example of affixation: the plural affix.', '4.4 Chinese Personal Names.', 'Full Chinese personal names are in one respect simple: they are always of the form family+given.', 'The family name set is restricted: there are a few hundred single-hanzi family names, and about ten double-hanzi ones.', 'Given names are most commonly two hanzi long, occasionally one hanzi long: there are thus four possible name types, which can be described by a simple set of context-free rewrite rules such as the following: 1.', 'wo rd => na m e 2.', 'na me =>1 ha nzi fa mi ly 2 ha nzi gi ve n 3.', 'na me =>1 ha nzi fa mi ly 1 ha nzi gi ve n 4.', 'na me =>2 ha nzi fa mi ly 2 ha nzi gi ve n 5.', 'na me =>2 
ha nzi fa mi ly 1 ha nzi gi ve n 6.1 ha nzi fa mi ly => ha nz ii 7.2 ha nzi fa mi ly => ha nzi i ha nz ij 8.1 ha nzi gi ve n => ha nz ii 9.2 ha nzi giv en => ha nzi i ha nz ij The difficulty is that given names can consist, in principle, of any hanzi or pair of hanzi, so the possible given names are limited only by the total number of hanzi, though some hanzi are certainly far more likely than others.', 'For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name.', 'We can model this probability straightforwardly enough with a probabilistic version of the grammar just given, which would assign probabilities to the individual rules.', 'For example, given a sequence F1G1G2, where F1 is a legal single-hanzi family name, and Plural Nouns X g 0 g ""\' X X 0 T!i c""\'.', '0 X u} ""\' o; .2 X X>O!KXX XI<>< »C X X XX :X: X X ""\' X X XX >OODIIC:liiC:oiiiiCI--8!X:liiOC!I!S8K X X X 10 100 1000 10000 log(F)_base: R""2=0.20 (p < 0.005) X 100000 Figure 6 Plot of log frequency of base noun, against log frequency of plural nouns.', 'G1 and G2 are hanzi, we can estimate the probability of the sequence being a name as the product of: â\x80¢ the probability that a word chosen randomly from a text will be a name-p(rule 1), and â\x80¢ the probability that the name is of the form 1hanzi-family 2hanzi-given-p(rule 2), and â\x80¢ the probability that the family name is the particular hanzi F1-p(rule 6), and â\x80¢ the probability that the given name consists of the particular hanzi G1 and G2-p(rule 9) This model is essentially the one proposed in Chang et al.', '(1992).', ""The first probability is estimated from a name count in a text database, and the rest of the probabilities are estimated from a large list of personal names.n Note that in Chang et al.'s model the p(rule 9) is estimated as the product of the probability of finding G 1 in the first position of a two-hanzi given name and the probability of finding G2 in the second position of a two-hanzi given name, and we use essentially the same estimate here, with some modifications as described later on."", 'This model is easily incorporated into the segmenter by building a WFST restrictÂ\xad ing the names to the four licit types, with costs on the arcs for any particular name summing to an estimate of the cost of that name.', ""This WFST is then summed with the WFST implementing the dictionary and morphological rules, and the transitive closure of the resulting transducer is computed; see Pereira, Riley, and Sproat (1994) for an explanation of the notion of summing WFSTs.12 Conceptual Improvements over Chang et al.'s Model."", ""There are two weaknesses in Chang et al.'s model, which we improve upon."", 'First, the model assumes independence between the first and second hanzi of a double given name.', ""Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model."", 'As a partial solution, for pairs of hanzi that co-occur sufficiently often in our namelists, we use the estimated bigram cost, rather than the independence-based cost.', 'The second weakness is purely conceptual, and probably does not affect the perÂ\xad formance of the model.', 'For previously unseen hanzi in given names, Chang et al. 
assign a uniform small cost; but we know that some unseen hanzi are merely acciÂ\xad dentally missing, whereas others are missing for a reason-for example, because they have a bad connotation.', 'As we have noted in Section 2, the general semantic class to which a hanzi belongs is often predictable from its semantic radical.', 'Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class.', 'Other good classes include JADE and GOLD; other bad classes are DEATH and RAT.', 'We can better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class.', ""Assuming unseen objects within each class are equiprobable, their probabilities are given by the Good-Turing theorem as: cis E( n'J.ls) Po oc N * E(N8ls) (2) where p815 is the probability of one unseen hanzi in class cls, E(n'J.15 ) is the expected number of hanzi in cls seen once, N is the total number of hanzi, and E(N(/ 5 ) is the expected number of unseen hanzi in class cls."", 'The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires.', 'In the denomi 11 We have two such lists, one containing about 17,000 full names, and another containing frequencies of.', 'hanzi in the various name positions, derived from a million names.', ""12 One class of full personal names that this characterization does not cover are married women's names."", ""where the husband's family name is optionally prepended to the woman's full name; thus ;f:*lf#i xu3lin2-yan2hai3 would represent the name that Ms. Lin Yanhai would take if she married someone named Xu."", 'This style of naming is never required and seems to be losing currency.', 'It is formally straightforward to extend the grammar to include these names, though it does increase the likelihood of overgeneration and we are unaware of any working systems that incorporate this type of name.', 'We of course also fail to identify, by the methods just described, given names used without their associated family name.', 'This is in general very difficult, given the extremely free manner in which Chinese given names are formed, and given that in these cases we lack even a family name to give the model confidence that it is identifying a name.', 'Table 1 The cost as a novel given name (second position) for hanzi from various radical classes.', 'JA DE G O L D G R AS S SI C K NE SS DE AT H R A T 14.', '98 15.', '52 15.', '76 16.', '25 16.', '30 16.', '42 nator, the N31s can be measured well by counting, and we replace the expectation by the observation.', 'In the numerator, however, the counts of ni1s are quite irregular, inÂ\xad cluding several zeros (e.g., RAT, none of whose members were seen).', 'However, there is a strong relationship between ni1s and the number of hanzi in the class.', 'For E(ni1s), then, we substitute a smooth S against the number of class elements.', 'This smooth guarantees that there are no zeroes estimated.', 'The final estimating equation is then: (3) Since the total of all these class estimates was about 10% off from the Turing estimate n1/N for the probability of all unseen hanzi, we renormalized the estimates so that they would sum to n 1jN.', 'This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name.', 'Note that the good classes 
JADE, GOLD and GRASS have lower costs than the bad classes SICKNESS, DEATH and RAT, as desired, so the trend observed for the results of this method is in the right direction.', '4.5 Transliterations of Foreign Words.', 'Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name.', 'Since foreign names can be of any length, and since their original pronunciation is effectively unlimited, the identiÂ\xad fication of such names is tricky.', ""Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as E. bal, m er3, and iij al are often clear indicators that a sequence of hanzi containing them is foreign: even a name like !:i*m xia4mi3-er3 'Shamir,' which is a legal ChiÂ\xad nese personal name, retains a foreign flavor because of liM."", 'As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probabilÂ\xad ity of occurrence of each hanzi in a transliteration (pTN(hanzi;)) using the maximum likelihood estimate.', 'As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN).', 'Finally, we model the probability of a new transliterated name as the product of PTN and PTN(hanzi;) for each hanzi; in the putative name.13 The foreign name model is implemented as an WFST, which is then summed with the WFST implementing the dictionary, morpho 13 The current model is too simplistic in several respects.', 'For instance, the common ""suffixes,"" -nia (e.g.,.', 'Virginia) and -sia are normally transliterated as fbSi!', 'ni2ya3 and @5:2 xilya3, respectively.', 'The interdependence between fb or 1/!i, and 5:2 is not captured by our model, but this could easily be remedied.', 'logical rules, and personal names; the transitive closure of the resulting machine is then computed.', 'In this section we present a partial evaluation of the current system, in three parts.', ""The first is an evaluation of the system's ability to mimic humans at the task of segmenting text into word-sized units; the second evaluates the proper-name identification; the third measures the performance on morphological analysis."", 'To date we have not done a separate evaluation of foreign-name recognition.', 'Evaluation of the Segmentation as a Whole.', 'Previous reports on Chinese segmentation have invariably cited performance either in terms of a single percent-correct score, or else a single precision-recall pair.', 'The problem with these styles of evaluation is that, as we shall demonstrate, even human judges do not agree perfectly on how to segment a given text.', 'Thus, rather than give a single evaluative score, we prefer to compare the performance of our method with the judgments of several human subjects.', 'To this end, we picked 100 sentences at random containing 4,372 total hanzi from a test corpus.14 (There were 487 marks of punctuation in the test sentences, including the sentence-final periods, meaning that the average inter-punctuation distance was about 9 hanzi.)', 'We asked six native speakers-three from Taiwan (TlT3), and three from the Mainland (M1M3)-to segment the corpus.', 'Since we could not bias the subjects towards a particular segmentation and did not presume linguistic sophistication on their part, the instructions were simple: subjects were to mark all places they 
might plausibly pause if they were reading the text aloud.', ""An examination of the subjects' bracketings confirmed that these instructions were satisfactory in yielding plausible word-sized units."", '(See also Wu and Fung [1994].)', 'Various segmentation approaches were then compared with human performance: 1.', 'A greedy algorithm (or maximum-matching algorithm), GR: proceed through the sentence, taking the longest match with a dictionary entry at each point.', '2.', 'An anti-greedy algorithm, AG: instead of the longest match, take the.', 'shortest match at each point.', '3.', 'The method being described-henceforth ST..', 'Two measures that can be used to compare judgments are: 1.', 'Precision.', 'For each pair of judges consider one judge as the standard,.', ""computing the precision of the other's judgments relative to this standard."", '2.', 'Recall.', 'For each pair of judges, consider one judge as the standard,.', ""computing the recall of the other's judgments relative to this standard."", 'Clearly, for judges h and h taking h as standard and computing the precision and recall for Jz yields the same results as taking h as the standard, and computing for h, 14 All evaluation materials, with the exception of those used for evaluating personal names were drawn.', 'from the subset of the United Informatics corpus not used in the training of the models.', 'Table 2 Similarity matrix for segmentation judgments.', 'Jud ges A G G R ST M 1 M 2 M 3 T1 T2 T3 AG 0.7 0 0.7 0 0 . 4 3 0.4 2 0.6 0 0.6 0 0.6 2 0.5 9 GR 0.9 9 0 . 6 2 0.6 4 0.7 9 0.8 2 0.8 1 0.7 2 ST 0 . 6 4 0.6 7 0.8 0 0.8 4 0.8 2 0.7 4 M1 0.7 7 0.6 9 0.7 1 0.6 9 0.7 0 M2 0.7 2 0.7 3 0.7 1 0.7 0 M3 0.8 9 0.8 7 0.8 0 T1 0.8 8 0.8 2 T2 0.7 8 respectively, the recall and precision.', 'We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity.', 'Table 2 shows these similarity measures.', 'The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the interhuman agreement.15 One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, Wilks 1988) on that disÂ\xad tance matrix, and plotting the first two most significant dimensions.', 'The result of this is shown in Figure 7.', 'The horizontal axis in this plot represents the most significant dimension, which explains 62% of the variation.', 'In addition to the automatic methods, AG, GR, and ST, just discussed, we also added to the plot the values for the current algorithm using only dictionary entries (i.e., no productively derived words or names).', 'This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based.', 'As can be seen, GR and this ""pared-down"" statistical method perform quite similarly, though the statistical method is still slightly better.16 AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods.', 'It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third TaiÂ\xad wan speaker is also close in the most significant dimension (the x axis).', 'Two of the Mainlanders also cluster close together but, interestingly, not particularly close to 
the Taiwan speakers; the third Mainlander is much more similar to the Taiwan speakers.', 'The breakdown of the different types of words found by ST in the test corpus is given in Table 3.', 'Clearly the percentage of productively formed words is quite small (for this particular corpus), meaning that dictionary entries are covering most of the 15 GR is .73 or 96%..', '16 As one reviewer points out, one problem with the unigram model chosen here is that there is still a. tendency to pick a segmentation containing fewer words.', 'That is, given a choice between segmenting a sequence abc into abc and ab, c, the former will always be picked so long as its cost does not exceed the summed costs of ab and c: while; it is possible for abc to be so costly as to preclude the larger grouping, this will certainly not usually be the case.', 'In this way, the method reported on here will necessarily be similar to a greedy method, though of course not identical.', 'As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes.', 'The question is how to normalize the probabilities in such a way that smaller groupings have a better shot at winning.', 'This is an issue that we have not addressed at the current stage of our research.', 'i..f,..', '""c\' 0 + 0 ""0 \' â\x80¢ + a n t i g r e e d y x g r e e d y < > c u r r e n t m e t h o d o d i e t . o n l y â\x80¢ Taiwan 0 ·;; 0 c CD E i5 0""\' 9 9 â\x80¢ Mainland â\x80¢ â\x80¢ â\x80¢ â\x80¢ -0.30.20.1 0.0 0.1 0.2 Dimension 1 (62%) Figure 7 Classical metric multidimensional scaling of distance matrix, showing the two most significant dimensions.', 'The percentage scores on the axis labels represent the amount of variation in the data explained by the dimension in question.', 'Table 3 Classes of words found by ST for the test corpus.', 'Word type N % Dic tion ary entr ies 2 , 5 4 3 9 7 . 4 7 Mor pho logi call y deri ved wor ds 3 0 . 1 1 Fore ign tran slite rati ons 9 0 . 3 4 Per son al na mes 5 4 2 . 
0 7 cases.', 'Nonetheless, the results of the comparison with human judges demonstrates that there is mileage being gained by incorporating models of these types of words.', 'It may seem surprising to some readers that the interhuman agreement scores reported here are so low.', 'However, this result is consistent with the results of exÂ\xad periments discussed in Wu and Fung (1994).', 'Wu and Fung introduce an evaluation method they call nk-blind.', 'Under this scheme, n human judges are asked independently to segment a text.', 'Their results are then compared with the results of an automatic segmenter.', 'For a given ""word"" in the automatic segmentation, if at least k of the huÂ\xad man judges agree that this is a word, then that word is considered to be correct.', 'For eight judges, ranging k between 1 and 8 corresponded to a precision score range of 90% to 30%, meaning that there were relatively few words (30% of those found by the automatic segmenter) on which all judges agreed, whereas most of the words found by the segmenter were such that one human judge agreed.', 'Proper-Name Identification.', 'To evaluate proper-name identification, we randomly seÂ\xad lected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a sinÂ\xad gle unambiguous answer, unlike the more general question of which segmentation is correct.', 'The performance was 80.99% recall and 61.83% precision.', 'Interestingly, Chang et al. report 80.67% recall and 91.87% precision on an 11,000 word corpus: seemingly, our system finds as many names as their system, but with four times as many false hits.', ""However, we have reason to doubt Chang et al.'s performance claims."", 'Without using the same test corpus, direct comparison is obviously difficult; fortunately, Chang et al. include a list of about 60 sentence fragments that exemplify various categories of performance for their system.', 'The performance of our system on those sentences apÂ\xad peared rather better than theirs.', 'On a set of 11 sentence fragments-the A set-where they reported 100% recall and precision for name identification, we had 73% recall and 80% precision.', 'However, they list two sets, one consisting of 28 fragments and the other of 22 fragments, in which they had 0% recall and precision.', 'On the first of these-the B set-our system had 64% recall and 86% precision; on the second-the C set-it had 33% recall and 19% precision.', 'Note that it is in precision that our overÂ\xad all performance would appear to be poorer than the reported performance of Chang et al., yet based on their published examples, our system appears to be doing better precisionwise.', 'Thus we have some confidence that our own performance is at least as good as that of Chang et al.', '(1992).', ""In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names.17 Wang, Li, and Chang also compare their performance with Chang et al.'s system."", 'Fortunately, we were able to obtain a copy of the full set of sentences from Chang et al. 
on which Wang, Li, and Chang tested their system, along with the output of their system.18 In what follows we will discuss all cases from this set where our performance on names differs from that of Wang, Li, and Chang.', 'Examples are given in Table 4.', 'In these examples, the names identified by the two systems (if any) are underlined; the sentence with the correct segmentation is boxed.19 The differences in performance between the two systems relate directly to three issues, which can be seen as differences in the tuning of the models, rather than repreÂ\xad senting differences in the capabilities of the model per se.', 'The first issue relates to the completeness of the base lexicon.', ""The Wang, Li, and Chang system fails on fragment (b) because their system lacks the word youlyoul 'soberly' and misinterpreted the thus isolated first youl as being the final hanzi of the preceding name; similarly our system failed in fragment (h) since it is missing the abbreviation i:lJI!"", ""tai2du2 'Taiwan Independence.'"", 'This is a rather important source of errors in name identifiÂ\xad cation, and it is not really possible to objectively evaluate a name recognition system without considering the main lexicon with which it is used.', ""17 They also provide a set of title-driven rules to identify names when they occur before titles such as $t. 1: xianlshengl 'Mr.' or i:l:itr!J tai2bei3 shi4zhang3 'Taipei Mayor.'"", 'Obviously, the presence of a title after a potential name N increases the probability that N is in fact a name.', 'Our system does not currently make use of titles, but it would be straightforward to do so within the finite-state framework that we propose.', '18 We are grateful to ChaoHuang Chang for providing us with this set.', ""Note that Wang, Li, and Chang's."", 'set was based on an earlier version of the Chang et a!.', 'paper, and is missing 6 examples from the A set.', ""19 We note that it is not always clear in Wang, Li, and Chang's examples which segmented words."", 'constitute names, since we have only their segmentation, not the actual classification of the segmented words.', 'Therefore in cases where the segmentation is identical between the two systems we assume that tagging is also identical.', 'Table 4 Differences in performance between our system and Wang, Li, and Chang (1992).', 'Our System Wang, Li, and Chang a. 1\\!f!IP Eflltii /1\\!f!J:P $1til I b. agm: I a m: c. 5 Bf is Bf 1 d. 
""*:t: w _t ff 1 ""* :t: w_tff 1 g., , Transliteration/Translation chen2zhongl-shenl qu3 \'music by Chen Zhongshen \' huang2rong2 youlyoul de dao4 \'Huang Rong said soberly\' zhangl qun2 Zhang Qun xian4zhang3 you2qingl shang4ren2 hou4 \'after the county president You Qing had assumed the position\' lin2 quan2 \'Lin Quan\' wang2jian4 \'Wang Jian\' oulyang2-ke4 \'Ouyang Ke\' yinl qi2 bu4 ke2neng2 rong2xu3 tai2du2 er2 \'because it cannot permit Taiwan Independence so\' silfa3-yuan4zhang3 lin2yang2-gang3 \'president of the Judicial Yuan, Lin Yanggang\' lin2zhangl-hu2 jiangl zuo4 xian4chang3 jie3shuol \'Lin Zhanghu will give an exÂ\xad planation live\' jin4/iang3 nian2 nei4 sa3 xia4 de jinlqian2 hui4 ting2zhi3 \'in two years the distributed money will stop\' gaoltangl da4chi2 ye1zi0 fen3 \'chicken stock, a tablespoon of coconut flakes\' you2qingl ru4zhu3 xian4fu3 lwu4 \'after You Qing headed the county government\' Table 5 Performance on morphological analysis.', 'Affix Pron Base category N found N missed (recall) N correct (precision) t,-,7 The second issue is that rare family names can be responsible for overgeneration, especially if these names are otherwise common as single-hanzi words.', ""For example, the Wang, Li, and Chang system fails on the sequence 1:f:p:]nian2 nei4 sa3 in (k) since 1F nian2 is a possible, but rare, family name, which also happens to be written the same as the very common word meaning 'year.'"", 'Our system fails in (a) because of$ shenl, a rare family name; the system identifies it as a family name, whereas it should be analyzed as part of the given name.', 'Finally, the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively high-frequency single-hanzi words.', 'An example is in (i), where the system fails to group t;,f;?""$?t!: lin2yang2gang3 as a name, because all three hanzi can in principle be separate words (t;,f; lin2 \'wood\';?""$ yang2 \'ocean\'; ?t!; gang3 \'harbor\').', 'In many cases these failures in recall would be fixed by having better estimates of the actual probÂ\xad abilities of single-hanzi words, since our estimates are often inflated.', ""A totally nonÂ\xad stochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended."", 'Evaluation of Morphological Analysis.', 'In Table 5 we present results from small test corÂ\xad pora for the productive affixes handled by the current version of the system; as with names, the segmentation of morphologically derived words is generally either right or wrong.', ""The first four affixes are so-called resultative affixes: they denote some propÂ\xad erty of the resultant state of a verb, as in E7 wang4bu4-liao3 (forget-not-attain) 'cannot forget.'"", 'The last affix in the list is the nominal plural f, men0.20 In the table are the (typical) classes of words to which the affix attaches, the number found in the test corpus by the method, the number correct (with a precision measure), and the number missed (with a recall measure).', 'In this paper we have argued that Chinese word segmentation can be modeled efÂ\xad fectively using weighted finite-state transducers.', 'This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.', 
'Other kinds of productive word classes, such as company names, abbreviations (termed fijsuolxie3 in Mandarin), and place names can easily be 20 Note that 7 in E 7 is normally pronounced as leO, but as part of a resultative it is liao3..', 'handled given appropriate models.', '(For some recent corpus-based work on Chinese abbreviations, see Huang, Ahrens, and Chen [1993].)', 'We have argued that the proposed method performs well.', 'However, some caveats are in order in comparing this method (or any method) with other approaches to segÂ\xad mentation reported in the literature.', 'First of all, most previous articles report perforÂ\xad mance in terms of a single percent-correct score, or else in terms of the paired measures of precision and recall.', 'What both of these approaches presume is that there is a sinÂ\xad gle correct segmentation for a sentence, against which an automatic algorithm can be compared.', 'We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted.', 'This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca.', 'May 1995).', 'However, until such standards are universally adopted in evaluating Chinese segmenters, claims about performance in terms of simple measures like percent correct should be taken with a grain of salt; see, again, Wu and Fung (1994) for further arguments supporting this conclusion.', 'Second, comparisons of different methods are not meaningful unless one can evalÂ\xad uate them on the same corpus.', 'Unfortunately, there is no standard corpus of Chinese texts, tagged with either single or multiple human judgments, with which one can compare performance of various methods.', 'One hopes that such a corpus will be forthÂ\xad coming.', 'Finally, we wish to reiterate an important point.', 'The major problem for our segÂ\xad menter, as for all segmenters, remains the problem of unknown words (see Fung and Wu [1994]).', 'We have provided methods for handling certain classes of unknown words, and models for other classes could be provided, as we have noted.', 'However, there will remain a large number of words that are not readily adduced to any producÂ\xad tive pattern and that would simply have to be added to the dictionary.', 'This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor-from the point of view of performance alone-than the particular computational methods used.', 'The method reported in this paper makes use solely of unigram probabilities, and is therefore a zeroeth-order model: the cost of a particular segmentation is estimated as the sum of the costs of the individual words in the segmentation.', 'However, as we have noted, nothing inherent in the approach precludes incorporating higher-order constraints, provided they can be effectively modeled within a finite-state framework.', 'For example, as Gan (1994) has noted, one can construct examples where the segmenÂ\xad tation is locally ambiguous but can be determined on the basis of sentential or even discourse context.', ""Two sets of examples from Gan are given in (1) and (2) (:::::: Gan's Appendix B, exx."", 'lla/llb and 14a/14b 
respectively).', 'In (1) the sequencema3lu4 cannot be resolved locally, but depends instead upon broader context; similarly in (2), the sequence :::tcai2neng2 cannot be resolved locally: 1.', ""(a) 1 § . ;m t 7 leO z h e 4 pil m a 3 lu 4 sh an g4 bi ng 4 t h i s CL (assi fier) horse w ay on sic k A SP (ec t) 'This horse got sick on the way' (b) 1§: . til y zhe4 tiao2 ma3lu4 hen3 shao3 this CL road very few 'Very few cars pass by this road' :$ chel jinglguo4 car pass by 2."", ""(a) I f f fi * fi :1 }'l ij 1§: {1M m m s h e n 3 m e 0 shi2 ho u4 wo 3 cai2 ne ng 2 ke4 fu 2 zh e4 ge 4 ku n4 w h a t ti m e I just be abl e ov er co m e thi s C L dif fic 'When will I be able to overcome this difficulty?'"", ""(b) 89 :1 t& tal de cai2neng2 hen3 he DE talent very 'He has great talent' f.b ga ol hig h While the current algorithm correctly handles the (b) sentences, it fails to handle the (a) sentences, since it does not have enough information to know not to group the sequences.ma3lu4 and?]cai2neng2 respectively."", ""Gan's solution depends upon a fairly sophisticated language model that attempts to find valid syntactic, semantic, and lexical relations between objects of various linguistic types (hanzi, words, phrases)."", 'An example of a fairly low-level relation is the affix relation, which holds between a stem morpheme and an affix morpheme, such as f1 -menD (PL).', 'A high-level relation is agent, which relates an animate nominal to a predicate.', 'Particular instances of relations are associated with goodness scores.', 'Particular relations are also consistent with particular hypotheses about the segmentation of a given sentence, and the scores for particular relations can be incremented or decremented depending upon whether the segmentations with which they are consistent are ""popular"" or not.', ""While Gan's system incorporates fairly sophisticated models of various linguistic information, it has the drawback that it has only been tested with a very small lexicon (a few hundred words) and on a very small test set (thirty sentences); there is therefore serious concern as to whether the methods that he discusses are scalable."", 'Another question that remains unanswered is to what extent the linguistic information he considers can be handled-or at least approximated-by finite-state language models, and therefore could be directly interfaced with the segmentation model that we have presented in this paper.', 'For the examples given in (1) and (2) this certainly seems possible.', 'Consider first the examples in (2).', ""The segmenter will give both analyses :1 cai2 neng2 'just be able,' and ?]cai2neng2 'talent,' but the latter analysis is preferred since splitting these two morphemes is generally more costly than grouping them."", ""In (2a), we want to split the two morphemes since the correct analysis is that we have the adverb :1 cai2 'just,' the modal verb neng2 'be able' and the main verb R: Hke4fu2 'overcome'; the competing analysis is, of course, that we have the noun :1 cai2neng2 'talent,' followed by }'lijke4fu2 'overcome.'"", 'Clearly it is possible to write a rule that states that if an analysis Modal+ Verb is available, then that is to be preferred over Noun+ Verb: such a rule could be stated in terms of (finite-state) local grammars in the sense of Mohri (1993).', ""Turning now to (1), we have the similar problem that splitting.into.ma3 'horse' andlu4 'way' is more costly than retaining this as one word .ma3lu4 'road.'"", ""However, there is again local grammatical information that 
should favor the split in the case of (1a): both .ma3 'horse' and .ma3 lu4 are nouns, but only .ma3 is consistent with the classifier pil, the classifier for horses.21 By a similar argument, the preference for not splitting , lm could be strengthened in (lb) by the observation that the classifier 1'1* tiao2 is consistent with long or winding objects like , lm ma3lu4 'road' but not with,ma3 'horse.'"", 'Note that the sets of possible classifiers for a given noun can easily be encoded on that noun by grammatical features, which can be referred to by finite-state grammatical rules.', ""Thus, we feel fairly confident that for the examples we have considered from Gan's study a solution can be incorporated, or at least approximated, within a finite-state framework."", 'With regard to purely morphological phenomena, certain processes are not hanÂ\xad dled elegantly within the current framework Any process involving reduplication, for instance, does not lend itself to modeling by finite-state techniques, since there is no way that finite-state networks can directly implement the copying operations required.', 'Mandarin exhibits several such processes, including A-not-A question formation, ilÂ\xad lustrated in (3a), and adverbial reduplication, illustrated in (3b): 3.', ""(a) ;IE shi4 'be' => ;IE;IE shi4bu2-shi4 (be-not-be) 'is it?'"", 'JI!', ""gaolxing4 'happy' => F.i'JF.i'J Jl!"", ""gaolbu4-gaolxing4 (hap-not-happy) 'happy?'"", ""(b) F.i'JJI!"", ""gaolxing4 'happy'=> F.i'JF.i'JJI!JI!"", ""gaolgaolxing4xing4 'happily' In the particular form of A-not-A reduplication illustrated in (3a), the first syllable of the verb is copied, and the negative markerbu4 'not' is inserted between the copy and the full verb."", 'In the case of adverbial reduplication illustrated in (3b) an adjective of the form AB is reduplicated as AABB.', 'The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer.', 'Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages.', 'The model we use provides a simple framework in which to incorporate a wide variety of lexical information in a uniform way.', 'The use of weighted transducers in particular has the attractive property that the model, as it stands, can be straightforwardly interfaced to other modules of a larger speech or natural language system: presumably one does not want to segment Chinese text for its own sake but instead with a larger purpose in mind.', 'As described in Sproat (1995), the Chinese segmenter presented here fits directly into the context of a broader finite-state model of text analysis for speech synthesis.', 'Furthermore, by inverting the transducer so that it maps from phonemic transcriptions to hanzi sequences, one can apply the segmenter to other problems, such as speech recognition (Pereira, Riley, and Sproat 1994).', 'Since the transducers are built from human-readable descriptions using a lexical toolkit (Sproat 1995), the system is easily maintained and extended.', 'While size of the resulting transducers may seem daunting-the segmenter described here, as it is used in the Bell Labs Mandarin TTS system has about 32,000 states and 209,000 arcs-recent work on minimization of weighted machines and transducers (cf.', '21 In Chinese, numerals and demonstratives cannot modify nouns directly, and must be accompanied by.', 'a classifier.', 'The 
particular classifier used depends upon the noun.', 'Mohri [1995]) shows promise for improving this situation.', 'The model described here thus demonstrates great potential for use in widespread applications.', 'This flexibility, along with the simplicity of implementation and expansion, makes this framework an attractive base for continued research.', ""We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior ChineseEnglish Electronic Dictionary.'"", 'We further thank Dr. J.-S.', 'Chang of Tsinghua University, Taiwan, R.O.C., for kindly providing us with the name corpora.', 'We also thank ChaoHuang Chang, reviewers for the 1994 ACL conference, and four anonymous reviewers for Computational Linguistics for useful comments.']",extractive -W04-0213,W04-0213,7,19,"Nevertheless, only a part of this corpus (10 texts), which the authors name ""core corpus"", is annotated with all this information.","There is a ‘core corpus’ of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below.","['The Potsdam Commentary Corpus', 'A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure.', 'The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation.', 'A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees.', 'Two aspects of the corpus have been presented in previous papers ((Re- itter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).', 'This paper, however, provides a comprehensive overview of the data collection effort and its current state.', 'At present, the â\x80\x98Potsdam Commentary Corpusâ\x80\x99 (henceforth â\x80\x98PCCâ\x80\x99 for short) consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.', 'The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus.', 'Commentaries argue in favor of a specific point of view toward some political issue, often dicussing yet dismissing other points of view; therefore, they typically offer a more interesting rhetorical structure than, say, narrative text or other portions of newspapers.', 'The choice of the particular newspaper was motivated by the fact that the language used in a regional daily is somewhat simpler than that of papers read nationwide.', '(Again, the goal of also in structural features.', 'As an indication, in our core corpus, we found an average sentence length of 15.8 words and 1.8 verbs per sentence, whereas a randomly taken sample of ten commentaries from the national papers Su¨ddeutsche Zeitung and Frankfurter Allgemeine has 19.6 words and 2.1 verbs per sentence.', 'The commentaries in PCC are all of roughly the same length, ranging from 8 to 10 sentences.', 'For illustration, an English translation of one of the commentaries is given in Figure 1.', 'The paper is organized as follows: Section 2 explains 
the different layers of annotation that have been produced or are being produced.', 'Section 3 discusses the applications that have been completed with PCC, or are under way, or are planned for the future.', 'Section 4 draws some conclusions from the present state of the effort.', 'The corpus has been annotated with six different types of information, which are characterized in the following subsections.', 'Not all the layers have been produced for all the texts yet.', 'There is a ‘core corpus’ of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below.', 'All annotations are done with specific tools and in XML; each layer has its own DTD.', 'This offers the well-known advantages for interchangeability, but it raises the question of how to query the corpus across levels of annotation.', 'We will briefly discuss this point in Section 3.1.', '2.1 Part-of-speech tags.', 'All commentaries have been tagged with part-of-speech information using Brants’ TnT1 tagger and the Stuttgart/Tübingen Tag Set automatic analysis was responsible for this decision.)', 'This is manifest in the lexical choices but 1 www.coli.unisb.de/~thorsten/tnt/ Dagmar Ziegler is up to her neck in debt.', 'Due to the dramatic fiscal situation in Brandenburg she now surprisingly withdrew legislation drafted more than a year ago, and suggested to decide on it not before 2003.', 'Unexpectedly, because the ministries of treasury and education both had prepared the teacher plan together.', 'This withdrawal by the treasury secretary is understandable, though.', 'It is difficult to motivate these days why one ministry should be exempt from cutbacks — at the expense of the others.', 'Reiche’s colleagues will make sure that the concept is waterproof.', 'Indeed there are several open issues.', 'For one thing, it is not clear who is to receive settlements or what should happen in case not enough teachers accept the offer of early retirement.', 'Nonetheless there is no alternative to Reiche’s plan.', 'The state in future has not enough work for its many teachers.', 'And time is short.', 'The significant drop in number of pupils will begin in the fall of 2003.', 'The government has to make a decision, and do it quickly.', 'Either save money at any cost - or give priority to education.', 'Figure 1: Translation of PCC sample commentary (STTS)2.', '2.2 Syntactic structure.', 'Annotation of syntactic structure for the core corpus has just begun.', 'We follow the guidelines developed in the TIGER project (Brants et al. 
2002) for syntactic annotation of German newspaper text, using the Annotate3 tool for interactive construction of tree structures.', '2.3 Rhetorical structure.', 'All commentaries have been annotated with rhetorical structure, using RSTTool4 and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988).', 'Two annotators received training with the RST definitions and started the process with a first set of 10 texts, the results of which were intensively discussed and revised.', 'Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.', 'Thus we opted not to take the step of creating more precise written annotation guidelines (as (Carlson, Marcu 2001) did for English), which would then allow for measuring inter-annotator agreement.', 'The motivation for our more informal approach was the intuition that there are so many open problems in rhetorical analysis (and more so for German than for English; see below) that the main task is qualitative investigation, whereas rigorous quantitative analyses should be performed at a later stage.', 'One conclusion drawn from this annotation effort was that for humans and machines alike, 2 www.sfs.nphil.unituebingen.de/Elwis/stts/ stts.html 3 www.coli.unisb.de/sfb378/negra-corpus/annotate.', 'html 4 www.wagsoft.com/RSTTool assigning rhetorical relations is a process loaded with ambiguity and, possibly, subjectivity.', 'We respond to this on the one hand with a format for its underspecification (see 2.4) and on the other hand with an additional level of annotation that attends only to connectives and their scopes (see 2.5), which is intended as an intermediate step on the long road towards a systematic and objective treatment of rhetorical structure.', '2.4 Underspecified rhetorical structure.', 'While RST (Mann, Thompson 1988) proposed that a single relation hold between adjacent text segments, SDRT (Asher, Lascarides 2003) maintains that multiple relations may hold simultaneously.', 'Within the RST â\x80\x9cuser communityâ\x80\x9d there has also been discussion whether two levels of discourse structure should not be systematically distinguished (intentional versus informational).', 'Some relations are signalled by subordinating conjunctions, which clearly demarcate the range of the text spans related (matrix clause, embedded clause).', 'When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back.', 'When the connective is an adverbial, there is much less clarity as to the range of the spans.', 'Assigning rhetorical relations thus poses questions that can often be answered only subjectively.', 'Our annotators pointed out that very often they made almost random decisions as to what relation to choose, and where to locate the boundary of a span.', '(Carlson, Marcu 2001) responded to this situation with relatively precise (and therefore long!)', 'annotation guidelines that tell annotators what to do in case of doubt.', 'Quite often, though, these directives fulfill the goal of increasing annotator agreement without in fact settling the theoretical question; i.e., the directives are clear but not always very well motivated.', 'In (Reitter, Stede 2003) we went a different way and suggested URML5, an XML format for underspecifying rhetorical structure: a number of relations can be assigned instead of a single one, competing analyses 
can be represented with shared forests.', 'The rhetorical structure annotations of PCC have all been converted to URML.', 'There are still some open issues to be resolved with the format, but it represents a first step.', 'What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly.', '2.5 Connectives with scopes.', 'For the â\x80\x98coreâ\x80\x99 portion of PCC, we found that on average, 35% of the coherence relations in our RST annotations are explicitly signalled by a lexical connective.6 When adding the fact that connectives are often ambiguous, one has to conclude that prospects for an automatic analysis of rhetorical structure using shallow methods (i.e., relying largely on connectives) are not bright â\x80\x94 but see Sections 3.2 and 3.3 below.', 'Still, for both human and automatic rhetorical analysis, connectives are the most important source of surface information.', 'We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes.', 'This was also inspired by the work on the Penn Discourse Tree Bank7 , which follows similar goals for English.', 'For effectively annotating connectives/scopes, we found that existing annotation tools were not well-suited, for two reasons: â\x80¢ Some tools are dedicated to modes of annotation (e.g., tiers), which could only quite un-intuitively be used for connectives and scopes.', 'â\x80¢ Some tools would allow for the desired annotation mode, but are so complicated (they can be used for many other purposes as well) that annotators take a long time getting used to them.', '5 â\x80\x98Underspecified Rhetorical Markup Languageâ\x80\x99 6 This confirms the figure given by (Schauer, Hahn.', 'Consequently, we implemented our own annotation tool ConAno in Java (Stede, Heintze 2004), which provides specifically the functionality needed for our purpose.', 'It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives.', 'The annotator can then â\x80\x9cclick awayâ\x80\x9d those words that are here not used as connectives (such as the conjunction und (â\x80\x98andâ\x80\x99) used in lists, or many adverbials that are ambiguous between connective and discourse particle).', 'Then, moving from connective to connective, ConAno sometimes offers suggestions for its scope (using heuristics like â\x80\x98for sub- junctor, mark all words up to the next comma as the first segmentâ\x80\x99), which the annotator can accept with a mouseclick or overwrite, marking instead the correct scope with the mouse.', 'When finished, the whole material is written into an XML-structured annotation file.', '2.6 Co-reference.', 'We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet.', 'The tool we use is MMAX8, which has been specifically designed for marking co-reference.', 'Upon identifying an anaphoric expression (currently restricted to: pronouns, prepositional adverbs, definite noun phrases), the an- notator first marks the antecedent expression (currently restricted to: various kinds of noun phrases, prepositional phrases, verb phrases, sentences) and then establishes the link between the two.', 'Links can be of two different kinds: anaphoric 
or bridging (definite noun phrases picking up an antecedent via world-knowledge).', 'â\x80¢ Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol â\x80\x93 the PopArt artist), or is-a (e.g., Andy Warhol was often hunted by photographers.', 'This fact annoyed especially his dog...).', 'â\x80¢ Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident.', 'The wounds are still healing.), entity-attribute (e.g., She 2001), who determined that in their corpus of German computer tests, 38% of relations were lexically signalled.', '7 www.cis.upenn.edu/â\x88¼pdtb/ 8 www.eml-research.de/english/Research/NLP/ Downloads had to buy a new car.', 'The price shocked her.), or same-kind (e.g., Her health insurance paid for the hospital fees, but the automobile insurance did not cover the repair.).', 'For displaying and querying the annoated text, we make use of the Annis Linguistic Database developed in our group for a large research effort (â\x80\x98Sonderforschungsbereichâ\x80\x99) revolving around 9 2.7 Information structure.', 'information structure.', 'The implementation is In a similar effort, (G¨otze 2003) developed a proposal for the theory-neutral annotation of information structure (IS) â\x80\x94 a notoriously difficult area with plenty of conflicting and overlapping terminological conceptions.', 'And indeed, converging on annotation guidelines is even more difficult than it is with co-reference.', 'Like in the co-reference annotation, G¨otzeâ\x80\x99s proposal has been applied by two annotators to the core corpus but it has not been systematically evaluated yet.', 'We use MMAX for this annotation as well.', 'Here, annotation proceeds in two phases: first, the domains and the units of IS are marked as such.', 'The domains are the linguistic spans that are to receive an IS-partitioning, and the units are the (smaller) spans that can play a role as a constituent of such a partitioning.', 'Among the IS-units, the referring expressions are marked as such and will in the second phase receive a label for cognitive status (active, accessible- text, accessible-situation, inferrable, inactive).', 'They are also labelled for their topicality (yes / no), and this annotation is accompanied by a confidence value assigned by the annotator (since it is a more subjective matter).', 'Finally, the focus/background partition is annotated, together with the focus question that elicits the corresponding answer.', 'Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions.', 'For all these annotation taks, G¨otze developed a series of questions (essentially a decision tree) designed to lead the annotator to the ap propriate judgement.', 'Having explained the various layers of annotation in PCC, we now turn to the question what all this might be good for.', 'This concerns on the one hand the basic question of retrieval, i.e. 
searching for information across the annotation layers (see 3.1).', 'On the other hand, we are interested in the application of rhetorical analysis or â\x80\x98discourse parsingâ\x80\x99 (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).', 'basically complete, yet some improvements and extensions are still under way.', 'The web-based Annis imports data in a variety of XML formats and tagsets and displays it in a tier-orientedway (optionally, trees can be drawn more ele gantly in a separate window).', 'Figure 2 shows a screenshot (which is of somewhat limited value, though, as color plays a major role in signalling the different statuses of the information).', 'In the small window on the left, search queries can be entered, here one for an NP that has been annotated on the co-reference layer as bridging.', 'The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) â\x80¢ the full text, â\x80¢ the annotation values for the activated annotation set (co-reference), â\x80¢ the actual annotation tiers, and â\x80¢ the portion of text currently â\x80\x98in focusâ\x80\x99 (which also appears underlined in the full text).', 'Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels.', 'Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase.', '3.2 Stochastic rhetorical analysis.', 'In an experiment on automatic rhetorical parsing, the RST-annotations and PoS tags were used by (Reitter 2003) as a training corpus for statistical classification with Support Vector Machines.', 'Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.', 'For the English RST-annotated corpus that is made available via LDC, his corresponding result is 62%.', 'Future work along these lines will incorporate other layers of annotation, in particular the syntax information.', '9 www.ling.unipotsdam.de/sfb/ Figure 2: Screenshot of Annis Linguistic Database 3.3 Symbolic and knowledge-based.', 'rhetorical analysis We are experimenting with a hybrid statistical and knowledge-based system for discourse parsing and summarization (Stede 2003), (Hanneforth et al. 
2003), again targeting the genre of commentaries.', 'The idea is to have a pipeline of shallow-analysis modules (tagging, chunk- ing, discourse parsing based on connectives) and map the resulting underspecified rhetorical tree (see Section 2.4) into a knowledge base that may contain domain and world knowledge for enriching the representation, e.g., to resolve references that cannot be handled by shallow methods, or to hypothesize coherence relations.', 'In the rhetorical tree, nuclearity information is then used to extract a â\x80\x9ckernel treeâ\x80\x9d that supposedly represents the key information from which the summary can be generated (which in turn may involve co-reference information, as we want to avoid dangling pronouns in a summary).', 'Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity.', 'In order to evaluate and advance this approach, it helps to feed into the knowledge base data that is already enriched with some of the desired information â\x80\x94 as in PCC.', 'That is, we can use the discourse parser on PCC texts, emulating for instance a â\x80\x9cco-reference oracleâ\x80\x9d that adds the information from our co-reference annotations.', 'The knowledge base then can be tested for its relation-inference capabilities on the basis of full-blown co-reference information.', 'Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module.', 'The general idea for the knowledge- based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.', 'For developing these mechanisms, the possibility to feed in hand-annotated information is very useful.', '3.4 Salience-based text generation.', 'Text generation, or at least the two phases of text planning and sentence planning, is a process driven partly by well-motivated choices (e.g., use this lexeme X rather than that more colloquial near-synonym Y ) and partly by con tation like that of PCC can be exploited to look for correlations in particular between syntactic structure, choice of referring expressions, and sentence-internal information structure.', 'A different but supplementary perspective on discourse-based information structure is taken 11ventionalized patterns (e.g., order of informa by one of our partner projects, which is inter tion in news reports).', 'And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet.', 'Preferences for constituent order (especially in languages with relatively free word order) often belong to this group.', 'Trying to integrate constituent ordering and choice of referring expressions, (Chiarcos 2003) developed a numerical model of salience propagation that captures various factors of authorâ\x80\x99s intentions and of information structure for ordering sentences as well as smaller constituents, and picking appropriate referring expressions.10 Chiarcos used the PCC annotations of co-reference and information structure to compute his numerical models for salience projection across the generated texts.', '3.5 Improved models of discourse.', 'structure Besides the applications just sketched, the over- arching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure.', 'One key issue here is to seek a discourse-based model 
of information structure.', 'Since DaneË\x87sâ\x80\x99 proposals of â\x80\x98thematic development patternsâ\x80\x99, a few suggestions have been made as to the existence of a level of discourse structure that would predict the information structure of sentences within texts.', '(Hartmann 1984), for example, used the term Reliefgebung to characterize the distibution of main and minor information in texts (similar to the notion of nuclearity in RST).', '(Brandt 1996) extended these ideas toward a conception of kommunikative Gewichtung (â\x80\x98communicative-weight assignmentâ\x80\x99).', 'A different notion of information structure, is used in work such as that of (?), who tried to characterize felicitous constituent ordering (theme choice, in particular) that leads to texts presenting information in a natural, â\x80\x9cflowingâ\x80\x9d way rather than with abrupt shifts of attention.', 'â\x80\x94ested in correlations between prosody and dis course structure.', 'A number of PCC commentaries will be read by professional news speakers and prosodic features be annotated, so that the various annotation layers can be set into correspondence with intonation patterns.', 'In focus is in particular the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations â\x80\x94 or groups of relations in particular configurations â\x80\x94 are signalled by speakers with prosodic means.', 'Besides information structure, the second main goal is to enhance current models of rhetorical structure.', 'As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on apects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graphâ\x80\x99s structures â\x80\x94 cf.', '(Webber et al., 2003)).', 'Again, the idea is that having a picture of syntax, co-reference, and sentence-internal information structure at oneâ\x80\x99s disposal should aid in finding models of discourse structure that are more explanatory and can be empirically supported.', 'The PCC is not the result of a funded project.', 'Instead, the designs of the various annotation layers and the actual annotation work are results of a series of diploma theses, of studentsâ\x80\x99 work in course projects, and to some extent of paid assistentships.', 'This means that the PCC cannot grow particularly quickly.', 'After the first step towards breadth had been taken with the PoS-tagging, RST annotation, and URML conversion of the entire corpus of 170 texts12 , emphasis shifted towards depth.', 'Hence we decided to select ten commentaries to form a â\x80\x98core corpusâ\x80\x99, for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence.', 'Cur In order to ground such approaches in linguistic observation and description, a multi-level anno 10 For an exposition of the idea as applied to the task of text planning, see (Chiarcos, Stede 2004).', '11 www.ling.unipotsdam.de/sfb/projekt a3.php 12 This step was carried out in the course of the diploma thesis work of David Reitter (2003), which de serves special mention here.', 'rently, some annotations (in particular the connectives and scopes) have already moved beyond the core corpus; the others will grow step by step.', 'The kind of annotation work presented here would clearly benefit from the emergence of standard formats and tag sets, which 
could lead to sharable resources of larger size.', 'Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages.', 'Nonetheless, the prospect of a network of annotated discourse resources seems particularly promising if not only a single annotation layer is used but a whole variety of them, so that a systematic search for correlations between them becomes possible, which in turn can lead to more explanatory models of discourse structure.']",abstractive -W99-0613_vardha,W99-0613,4,27,The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).,The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).,"['Unsupervised Models for Named Entity Classification Collins', 'This paper discusses the use of unlabeled examples for the problem of named entity classification.', 'A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classi- However, we show that the use of data can reduce the requirements for supervision to just 7 simple "seed" rules.', 'The approach gains leverage from natural redundancy in the data: for many named-entity instances both the spelling of the name and the context in which it appears are sufficient to determine its type.', 'We present two algorithms.', 'The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98).', 'The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98).', 'Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training examples.', 'Recent results (e.g., (Yarowsky 95; Brill 95; Blum and Mitchell 98)) have suggested that unlabeled data can be used quite profitably in reducing the need for supervision.', 'This paper discusses the use of unlabeled examples for the problem of named entity classification.', 'The task is to learn a function from an input string (proper name) to its type, which we will assume to be one of the categories Person, Organization, or Location.', 'For example, a good classifier would identify Mrs. Frank as a person, Steptoe & Johnson as a company, and Honduras as a location.', 'The approach uses both spelling and contextual rules.', 'A spelling rule might be a simple look-up for the string (e.g., a rule that Honduras is a location) or a rule that looks at words within a string (e.g., a rule that any string containing Mr. is a person).', 'A contextual rule considers words surrounding the string in the sentence in which it appears (e.g., a rule that any proper name modified by an appositive whose head is president is a person).', 'The task can be considered to be one component of the MUC (MUC-6, 1995) named entity task (the other task is that of segmentation, i.e., pulling possible people, places and locations from text before sending them to the classifier).', 'Supervised methods have been applied quite successfully to the full MUC named-entity task (Bikel et al. 
97).', 'At first glance, the problem seems quite complex: a large number of rules is needed to cover the domain, suggesting that a large number of labeled examples is required to train an accurate classifier.', 'But we will show that the use of unlabeled data can drastically reduce the need for supervision.', 'Given around 90,000 unlabeled examples, the methods described in this paper classify names with over 91% accuracy.', 'The only supervision is in the form of 7 seed rules (namely, that New York, California and U.S. are locations; that any name containing Mr is a person; that any name containing Incorporated is an organization; and that I.B.M. and Microsoft are organizations).', 'The key to the methods we describe is redundancy in the unlabeled data.', 'In many cases, inspection of either the spelling or context alone is sufficient to classify an example.', 'For example, in .., says Mr. Cooper, a vice president of.. both a spelling feature (that the string contains Mr.) and a contextual feature (that president modifies the string) are strong indications that Mr. Cooper is of type Person.', 'Even if an example like this is not labeled, it can be interpreted as a "hint" that Mr and president imply the same category.', 'The unlabeled data gives many such "hints" that two features should predict the same label, and these hints turn out to be surprisingly useful when building a classifier.', 'We present two algorithms.', 'The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).', '(Yarowsky 95) describes an algorithm for word-sense disambiguation that exploits redundancy in contextual features, and gives impressive performance.', ""Unfortunately, Yarowsky's method is not well understood from a theoretical viewpoint: we would like to formalize the notion of redundancy in unlabeled data, and set up the learning task as optimization of some appropriate objective function."", '(Blum and Mitchell 98) offer a promising formulation of redundancy, also prove some results about how the use of unlabeled examples can help classification, and suggest an objective function when training with unlabeled examples.', ""Our first algorithm is similar to Yarowsky's, but with some important modifications motivated by (Blum and Mitchell 98)."", 'The algorithm can be viewed as heuristically optimizing an objective function suggested by (Blum and Mitchell 98); empirically it is shown to be quite successful in optimizing this criterion.', 'The second algorithm builds on a boosting algorithm called AdaBoost (Freund and Schapire 97; Schapire and Singer 98).', 'The AdaBoost algorithm was developed for supervised learning.', 'AdaBoost finds a weighted combination of simple (weak) classifiers, where the weights are chosen to minimize a function that bounds the classification error on a set of training examples.', 'Roughly speaking, the new algorithm presented in this paper performs a similar search, but instead minimizes a bound on the number of (unlabeled) examples on which two classifiers disagree.', 'The algorithm builds two classifiers iteratively: each iteration involves minimization of a continuously differential function which bounds the number of examples on which the two classifiers disagree.', 'There has been additional recent work on inducing lexicons or other knowledge sources from large corpora.', '(Brin 98) ,describes a system for extracting (author, book-title) pairs from the World Wide Web using an approach that bootstraps from an initial seed set of examples.', '(Berland and 
Charniak 99) describe a method for extracting parts of objects from wholes (e.g., "speedometer" from "car") from a large corpus using hand-crafted patterns.', '(Hearst 92) describes a method for extracting hyponyms from a corpus (pairs of words in "isa" relations).', '(Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories).', 'The approach builds from an initial seed set for a category, and is quite similar to the decision list approach described in (Yarowsky 95).', 'More recently, (Riloff and Jones 99) describe a method they term "mutual bootstrapping" for simultaneously constructing a lexicon and contextual extraction patterns.', 'The method shares some characteristics of the decision list algorithm presented in this paper.', '(Riloff and Jones 99) was brought to our attention as we were preparing the final version of this paper.', '971,746 sentences of New York Times text were parsed using the parser of (Collins 96).1 Word sequences that met the following criteria were then extracted as named entity examples: whose head is a singular noun (tagged NN).', 'For example, take ..., says Maury Cooper, a vice president at S.&P.', 'In this case, Maury Cooper is extracted.', 'It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) whose head is a singular noun (president).', '2.', 'The NP is a complement to a preposition, which is the head of a PP.', 'This PP modifies another NP, whose head is a singular noun.', 'For example, ... fraud related to work on a federally funded sewage plant in Georgia In this case, Georgia is extracted: the NP containing it is a complement to the preposition in; the PP headed by in modifies the NP a federally funded sewage plant, whose head is the singular noun plant.', 'In addition to the named-entity string (Maury Cooper or Georgia), a contextual predictor was also extracted.', 'In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example).', 'From here on we will refer to the named-entity string itself as the spelling of the entity, and the contextual predicate as the context.', 'Having found (spelling, context) pairs in the parsed data, a number of features are extracted.', 'The features are used to represent each example for the learning algorithm.', 'In principle a feature could be an arbitrary predicate of the (spelling, context) pair; for reasons that will become clear, features are limited to querying either the spelling or context alone.', 'The following features were used: full-string=x The full string (e.g., for Maury Cooper, full- s tring=Maury_Cooper). contains(x) If the spelling contains more than one word, this feature applies for any words that the string contains (e.g., Maury Cooper contributes two such features, contains (Maury) and contains (Cooper) . allcapl This feature appears if the spelling is a single word which is all capitals (e.g., IBM would contribute this feature). allcap2 This feature appears if the spelling is a single word which is all capitals or full periods, and contains at least one period.', '(e.g., N.Y. would contribute this feature, IBM would not). 
nonalpha=x Appears if the spelling contains any characters other than upper or lower case letters.', 'In this case nonalpha is the string formed by removing all upper/lower case letters from the spelling (e.g., for Thomas E. Petry nonalpha=., for A. T.&T. nonalpha=..&.). context=x The context for the entity.', 'The first unsupervised algorithm we describe is based on the decision list method from (Yarowsky 95).', 'Before describing the unsupervised case we first describe the supervised version of the algorithm: Input to the learning algorithm: n labeled examples of the form (xi, yi). yi is the label of the ith example (given that there are k possible labels, yi is a member of Y = {1 ... k}). xi is a set of mi features {xi1, xi2, ..., ximi} associated with the ith example.', 'Each xij is a member of X, where X is a set of possible features.', 'Output of the learning algorithm: a function h: X x Y -> [0, 1] where h(x, y) is an estimate of the conditional probability p(y|x) of seeing label y given that feature x is present.', 'Alternatively, h can be thought of as defining a decision list of rules x -> y ranked by their "strength" h(x, y).', 'The label for a test example with features x is then defined as In this paper we define h(x, y) as the following function of counts seen in training data: Count(x,y) is the number of times feature x is seen with label y in training data, Count(x) = Σy∈Y Count(x, y). a is a smoothing parameter, and k is the number of possible labels.', 'In this paper k = 3 (the three labels are person, organization, location), and we set a = 0.1.', 'Equation 2 is an estimate of the conditional probability of the label given the feature, P(y|x).', 'We now introduce a new algorithm for learning from unlabeled examples, which we will call DL-CoTrain (DL stands for decision list, the term Cotrain is taken from (Blum and Mitchell 98)).', 'The input to the unsupervised algorithm is an initial, "seed" set of rules.', '2 (Yarowsky 95) describes the use of more sophisticated smoothing methods.', ""It's not clear how to apply these methods in the unsupervised case, as they required cross-validation techniques: for this reason we use the simpler smoothing method shown here."", 'In the named entity domain these rules were Each of these rules was given a strength of 0.9999.', ""The following algorithm was then used to induce new rules: Let Count'(x) be the number of times feature x is seen with some known label in the training data."", ""For each label (Person, Organization and Location), take the n contextual rules with the highest value of Count'(x) whose unsmoothed3 strength is above some threshold pmin."", '(If fewer than n rules have precision greater than pmin, we keep only those rules which exceed the precision threshold.) pmin was fixed at 0.95 in all experiments in this paper.', '3 Note that taking the top n most frequent rules already makes the method robust to low count events, hence we do not use smoothing, allowing low-count high-precision features to be chosen on later iterations.', 'Thus at each iteration the method induces at most n x k rules, where k is the number of possible labels (k = 3 in the experiments in this paper). 
step 3.', 'Otherwise, label the training data with the combined spelling/contextual decision list, then induce a final decision list from the labeled examples where all rules (regardless of strength) are added to the decision list.', 'We can now compare this algorithm to that of (Yarowsky 95).', ""The core of Yarowsky's algorithm is as follows: where h is defined by the formula in equation 2, with counts restricted to training data examples that have been labeled in step 2."", 'Set the decision list to include all rules whose (smoothed) strength is above some threshold Pmin.', 'There are two differences between this method and the DL-CoTrain algorithm: spelling and contextual features, alternating between labeling and learning with the two types of features.', 'Thus an explicit assumption about the redundancy of the features — that either the spelling or context alone should be sufficient to build a classifier — has been built into the algorithm.', 'To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested.', 'Yarowsky-cautious does not separate the spelling and contextual features, but does have a limit on the number of rules added at each stage.', '(Specifically, the limit n starts at 5 and increases by 5 at each iteration.)', 'The first modification — cautiousness — is a relatively minor change.', 'It was motivated by the observation that the (Yarowsky 95) algorithm added a very large number of rules in the first few iterations.', 'Taking only the highest frequency rules is much "safer", as they tend to be very accurate.', 'This intuition is borne out by the experimental results.', 'The second modification is more important, and is discussed in the next section.', 'An important reason for separating the two types of features is that this opens up the possibility of theoretical analysis of the use of unlabeled examples.', '(Blum and Mitchell 98) describe learning in the following situation: X = X1 x X2 where X1 and X2 correspond to two different "views" of an example.', 'In the named entity task, X1 might be the instance space for the spelling features, X2 might be the instance space for the contextual features.', 'By this assumption, each element x ∈ X can also be represented as (x1, x2) ∈ X1 x X2.', 'Thus the method makes the fairly strong assumption that the features can be partitioned into two types such that each type alone is sufficient for classification.', 'Now assume we have n pairs (x1,i, x2,i) drawn from X1 x X2, where the first m pairs have labels whereas for i = m + 1...n the pairs are unlabeled.', 'In a fully supervised setting, the task is to learn a function f such that for all i = 1...m, f(x1,i, x2,i) = yi.', 'In the cotraining case, (Blum and Mitchell 98) argue that the task should be to induce functions f1 and f2 such that f1(x1,i) = f2(x2,i) = yi for i = 1...m and f1(x1,i) = f2(x2,i) for i = m + 1...n. So f1 and f2 must (1) correctly classify the labeled examples, and (2) must agree with each other on the unlabeled examples.', 'The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem.', '(Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be.', 'Consider the case where |X1| = |X2| = N and N is a "medium" sized number so that it is feasible to collect O(N) unlabeled examples.', 'Assume that the two classifiers are "rote learners": that is, f1 and f2 are defined through look-up tables that list a label for each member of X1 or X2.', 'The problem is a binary classification 
'The problem can be represented as a graph with 2N vertices corresponding to the members of X1 and X2.', 'Each unlabeled pair (x_{1,i}, x_{2,i}) is represented as an edge between the nodes corresponding to x_{1,i} and x_{2,i} in the graph.', 'An edge indicates that the two features must have the same label.', 'Given a sufficient number of randomly drawn unlabeled examples (i.e., edges), we will induce two completely connected components that together span the entire graph.', 'Each vertex within a connected component must have the same label — in the binary classification case, we need a single labeled example to identify which component should get which label.', '(Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case.', 'They also describe an application of cotraining to classifying web pages (the two feature sets are the words on the page, and other pages pointing to the page).', 'The method halves the error rate in comparison to a method using the labeled examples alone.', 'Limitations of (Blum and Mitchell 98): While the assumptions of (Blum and Mitchell 98) are useful in developing both theoretical results and an intuition for the problem, the assumptions are quite limited.', 'In particular, it may not be possible to learn functions f1, f2 such that f1(x_{1,i}) = f2(x_{2,i}) for i = m+1 ... n: either because there is some noise in the data, or because it is just not realistic to expect to learn perfect classifiers given the features used for representation.', 'It may be more realistic to replace the second criterion with a softer one; for example, (Blum and Mitchell 98) suggest an alternative, softer constraint.', 'Alternatively, if f1 and f2 are probabilistic learners, it might make sense to encode the second constraint as one of minimizing some measure of the distance between the distributions given by the two learners.', 'The question of what soft function to pick, and how to design algorithms which optimize it, is an open question, but appears to be a promising way of looking at the problem.', 'The DL-CoTrain algorithm can be motivated as being a greedy method of satisfying the above two constraints.', 'At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.', 'Inspection of the data shows that at n = 2500, the two classifiers both give labels on 44,281 (49.2%) of the unlabeled examples, and give the same label on 99.25% of these cases.', 'So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree.', 'In the next section we present an alternative approach that builds two classifiers while attempting to satisfy the above constraints as much as possible.', 'The algorithm, called CoBoost, has the advantage of being more general than the decision-list learning algorithm.', '[Figure 1 (generalized AdaBoost pseudo-code): Input: (x_1, y_1), ..., (x_m, y_m); x_i in 2^X, y_i in {-1, +1}. Initialize D_1(i) = 1/m. For t = 1, ..., T: ...]', 'This section describes an algorithm based on boosting algorithms, which were previously developed for supervised machine learning problems.', 'We first give a brief overview of boosting algorithms.', 'We then discuss how we adapt and generalize a boosting algorithm, AdaBoost, to the problem of named entity classification.', 'The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel.', "(We would like to note though that unlike previous boosting algorithms, the CoBoost algorithm presented here is not a boosting algorithm under Valiant's (Valiant 84) Probably Approximately Correct (PAC) model.)",
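The rote-learner example of (Blum and Mitchell 98) described above can be illustrated with a small sketch: treat each spelling feature and each context feature as a vertex, add one edge per unlabeled pair, and let every vertex in a connected component inherit the component's single seed label. This is only an illustration of that argument, using a basic union-find; all names and the toy data are hypothetical.

    class UnionFind:
        """Minimal union-find over arbitrary hashable vertices."""
        def __init__(self):
            self.parent = {}
        def find(self, v):
            self.parent.setdefault(v, v)
            while self.parent[v] != v:
                self.parent[v] = self.parent[self.parent[v]]  # path halving
                v = self.parent[v]
            return v
        def union(self, a, b):
            self.parent[self.find(a)] = self.find(b)

    def propagate_labels(unlabeled_pairs, seed_labels):
        """unlabeled_pairs: (spelling_feature, context_feature) edges from unlabeled examples.
        seed_labels: a few ('spell'/'ctx', feature) vertices with known labels.
        Every vertex in a connected component inherits that component's seed label (or None)."""
        uf = UnionFind()
        for x1, x2 in unlabeled_pairs:
            uf.union(("spell", x1), ("ctx", x2))
        component_label = {uf.find(v): lab for v, lab in seed_labels.items()}
        return {v: component_label.get(uf.find(v)) for v in uf.parent}

    # With enough edges the components grow, so one labeled example per component suffices:
    labels = propagate_labels([("New_York", "mayor_of"), ("Albany", "mayor_of")],
                              {("spell", "New_York"): "location"})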
'This section describes AdaBoost, which is the basis for the CoBoost algorithm.', 'AdaBoost was first introduced in (Freund and Schapire 97); (Schapire and Singer 98) gave a generalization of AdaBoost which we will use in this paper.', 'For a description of the application of AdaBoost to various NLP problems see the paper by Abney, Schapire, and Singer in this volume.', 'The input to AdaBoost is a set of training examples ((x_1, y_1), ..., (x_m, y_m)).', 'Each x_i in 2^X is the set of features constituting the ith example.', 'For the moment we will assume that there are only two possible labels: each y_i is in {-1, +1}.', 'AdaBoost is given access to a weak learning algorithm, which accepts as input the training examples, along with a distribution over the instances.', 'The distribution specifies the relative weight, or importance, of each example — typically, the weak learner will attempt to minimize the weighted error on the training set, where the distribution specifies the weights.', 'The weak learner for two-class problems computes a weak hypothesis h from the input space into the reals (h : 2^X -> R), where the sign of h(x) is interpreted as the predicted label and the magnitude |h(x)| is the confidence in the prediction: large values of |h(x)| indicate high confidence in the prediction, and values close to zero indicate low confidence.', 'The weak hypothesis can abstain from predicting the label of an instance x by setting h(x) = 0.', 'The final strong hypothesis, denoted f(x), is then the sign of a weighted sum of the weak hypotheses, f(x) = sign(sum_{t=1}^{T} alpha_t h_t(x)), where the weights alpha_t are determined during the run of the algorithm, as we describe below.', 'Pseudo-code describing the generalized boosting algorithm of Schapire and Singer is given in Figure 1.', 'Note that Z_t is a normalization constant that ensures the distribution D_{t+1} sums to 1; it is a function of the weak hypothesis h_t and the weight alpha_t for that hypothesis chosen at the tth round.', 'The normalization factor plays an important role in the AdaBoost algorithm.', 'Schapire and Singer show that the training error is bounded above by prod_{t=1}^{T} Z_t.', 'Thus, in order to greedily minimize an upper bound on training error, on each iteration we should search for the weak hypothesis h_t and the weight alpha_t that minimize Z_t.', 'In our implementation, we make perhaps the simplest choice of weak hypothesis.', 'Each h_t is a function that predicts a label (+1 or -1) on examples containing a particular feature x_t, while abstaining on other examples.', 'The prediction of the strong hypothesis can then be written as the sign of the sum, over the weak hypotheses whose feature is present in the example, of alpha_t times the label predicted by h_t.', 'We now briefly describe how to choose h_t and alpha_t at each iteration.', 'Our derivation is slightly different from the one presented in (Schapire and Singer 98) as we restrict alpha_t to be positive.', 'Z_t can be written as Z_t = W_0 + W_+ e^{-alpha_t} + W_- e^{alpha_t}, where W_0, W_+ and W_- are the total weight (under D_t) of the examples on which h_t abstains, predicts the correct label, and predicts the incorrect label, respectively.', 'Following the derivation of Schapire and Singer, provided that W_+ > W_-, Equ. (4) is minimized by setting alpha_t = (1/2) ln(W_+ / W_-).', 'Since a feature may be present in only a few examples, W_- can in practice be very small or even 0, leading to extreme confidence values.', 'To prevent this we "smooth" the confidence by adding a small value, epsilon, to both W_+ and W_-, giving alpha_t = (1/2) ln((W_+ + epsilon) / (W_- + epsilon)).', 'Plugging the value of alpha_t from Equ. (5) and h_t into Equ. (4) gives Equ. (6).', 'In order to minimize Z_t, at each iteration the final algorithm should choose the weak hypothesis (i.e., a feature x_t) which has values for W_+ and W_- that minimize Equ. (6), with W_+ > W_-.', 'We now describe the CoBoost algorithm for the named entity problem.',
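As a rough illustration of the weak learner just described, the sketch below performs one boosting round with abstaining, single-feature weak hypotheses: it computes W_+ and W_- for each candidate rule, sets the smoothed confidence alpha_t as in Equ. (5), picks the rule with the smallest resulting Z_t, and reweights the examples. It is a sketch under the binary-label setup above, not the paper's code; the function name and data layout are assumptions.

    import math

    def boosting_round(examples, labels, D, eps=1e-4):
        """One round with abstaining, single-feature weak hypotheses.
        examples: list of feature sets; labels: list of +1/-1; D: example weights summing to 1.
        Returns the chosen (feature, predicted_label, alpha) and the updated distribution."""
        all_features = set().union(*examples)
        best = None
        for x in all_features:
            for pred in (+1, -1):
                # W+ / W-: weight of examples containing x that the rule "x -> pred" gets right / wrong
                w_plus = sum(D[i] for i, f in enumerate(examples) if x in f and labels[i] == pred)
                w_minus = sum(D[i] for i, f in enumerate(examples) if x in f and labels[i] != pred)
                if w_plus <= w_minus:
                    continue  # alpha is restricted to be positive, so require W+ > W-
                w_abstain = 1.0 - w_plus - w_minus
                alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))   # smoothed confidence, Equ. (5)
                z = w_abstain + w_plus * math.exp(-alpha) + w_minus * math.exp(alpha)  # Z_t for this choice
                if best is None or z < best[0]:
                    best = (z, x, pred, alpha)
        _, x, pred, alpha = best
        # D_{t+1}(i) proportional to D_t(i) * exp(-alpha * y_i * h_t(x_i)); h_t abstains if x is absent
        new_D = [D[i] * math.exp(-alpha * labels[i] * (pred if x in examples[i] else 0.0))
                 for i in range(len(D))]
        norm = sum(new_D)
        return x, pred, alpha, [d / norm for d in new_D]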
'Following the convention presented in earlier sections, we assume that each example is an instance pair of the form (x_{1,i}, x_{2,i}) where x_{j,i} in 2^{X_j}, j in {1, 2}.', 'In the named-entity problem each example is a (spelling, context) pair.', 'The first m pairs have labels y_i, whereas for i = m+1, ..., n the pairs are unlabeled.', 'We make the assumption that for each example, both x_{1,i} and x_{2,i} alone are sufficient to determine the label y_i.', 'The learning task is to find two classifiers f1 : 2^{X1} -> {-1, +1} and f2 : 2^{X2} -> {-1, +1} such that f1(x_{1,i}) = f2(x_{2,i}) = y_i for examples i = 1, ..., m, and f1(x_{1,i}) = f2(x_{2,i}) as often as possible on examples i = m+1, ..., n.', 'To achieve this goal we extend the auxiliary function that bounds the training error (see Equ. (3)) to be defined over unlabeled as well as labeled instances.', 'Denote by g_j(x) = sum_t alpha_t^j h_t^j(x), j in {1, 2}, the unthresholded strong hypothesis (i.e., f_j(x) = sign(g_j(x))).', 'We define the following function: Z_CO = sum_{i=1}^{m} (e^{-y_i g_1(x_{1,i})} + e^{-y_i g_2(x_{2,i})}) + sum_{i=m+1}^{n} (e^{-f_2(x_{2,i}) g_1(x_{1,i})} + e^{-f_1(x_{1,i}) g_2(x_{2,i})}).', 'If Z_CO is small, then it follows that the two classifiers must have a low error rate on the labeled examples, and that they also must give the same label on a large number of unlabeled instances.', 'To see this, note that the first two terms in the above equation correspond to the function that AdaBoost attempts to minimize in the standard supervised setting (Equ. (3)), with one term for each classifier.', 'The two new terms force the two classifiers to agree, as much as possible, on the unlabeled examples.', 'Put another way, the minimum of Equ. (7) is at 0 when: 1) for all i, sign(g_1(x_{1,i})) = sign(g_2(x_{2,i})); 2) |g_j(x_{j,i})| -> infinity; and 3) sign(g_1(x_{1,i})) = y_i for i = 1, ..., m.', 'In fact, Z_CO provides a bound on the sum of the classification errors on the labeled examples and the number of disagreements between the two classifiers on the unlabeled examples.', 'Formally, let epsilon_1 (epsilon_2) be the number of classification errors of the first (second) learner on the training data, and let epsilon_CO be the number of unlabeled examples on which the two classifiers disagree.', 'Then, it can be verified that epsilon_1 + epsilon_2 + epsilon_CO <= Z_CO.', 'We can now derive the CoBoost algorithm as a means of minimizing Z_CO.', 'The algorithm builds two classifiers in parallel from labeled and unlabeled data.', 'As in boosting, the algorithm works in rounds.', 'Each round is composed of two stages; each stage updates one of the classifiers while keeping the other classifier fixed.', 'Denote the unthresholded classifiers after t-1 rounds by g_j^{t-1}, and assume that it is the turn for the first classifier to be updated while the second one is kept fixed.', 'We first define "pseudo-labels" ~y_i as follows: ~y_i = y_i for 1 <= i <= m, and ~y_i = sign(g_2^{t-1}(x_{2,i})) for m < i <= n.', 'Thus the first m labels are simply copied from the labeled examples, while the remaining (n - m) examples are taken as the current output of the second classifier.', 'We can now add a new weak hypothesis h_t^1 based on a feature in X1 with a confidence value alpha_t^1.', 'h_t^1 and alpha_t^1 are chosen to minimize the function ...', 'We now define, for