summary_id,paper_id,source_sid,target_sid,source_text,target_text,target_doc,strategy
J96-3004,J96-3004,2,67,The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.,"A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags C that maximizes the probability P(C|S), where S is the sequence of words in a sentence, and C is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(C|S,D), where C is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) * prod_j alpha_j^f_j(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h,o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h,o) = 1 if o = true and previous word = the, and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(Footnote 1: http://maxent.sourceforge.net)', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (No. of Articles / No. of Tokens, MUC6 then MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu)', '(Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive
D10-1083,D10-1083,5,37,There are clustering approaches that assign a single POS tag to each word type.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c_1, …, c_n | s, D) = ∏_{i=1}^{n} p(c_i | s, D) × ∏_{i=2}^{n} P(c_i | c_{i-1}), where p(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training (Table 1: Features based on the token string).', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i+1} and w_{i-1}: Similarly, if w_{i+1} (or w_{i-1}) is initCaps, a corresponding feature (initCaps, zone) for that token is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}.
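As a concrete illustration, the case-and-zone and neighbor-lexicon feature groups described above can be sketched as follows. This is a minimal Python sketch; the function names and feature-string spellings are our own, not taken from the MENERGI implementation (which used the Java-based opennlp maxent package):

```python
def case_of(token):
    """Capitalization class of a token, as described in the text."""
    if token.isupper():
        return "allCaps"            # an allCaps token is also initCaps
    if token[:1].isupper():
        return "initCaps"
    if token[:1].islower() and any(c.isupper() for c in token):
        return "mixedCaps"
    return None

def local_features(tokens, i, zone):
    """Names of the binary features set to 1 for token w_i; all others are 0."""
    feats = {"nonContextual", "zone-" + zone}
    c = case_of(tokens[i])
    if c == "allCaps":
        feats.add("(allCaps," + zone + ")")
        feats.add("(initCaps," + zone + ")")   # allCaps implies initCaps
    elif c is not None:
        feats.add("(" + c + "," + zone + ")")
    # lexicon features of the previous/next token, paired with w_i's case
    cap = "initCaps" if tokens[i][:1].isupper() else "not-initCaps"
    if i + 1 < len(tokens):
        feats.add("(" + cap + ",next=" + tokens[i + 1] + ")")
    if i > 0:
        feats.add("(" + cap + ",prev=" + tokens[i - 1] + ")")
    return feats

print(sorted(local_features(["Mr.", "Smith", "said"], 1, "TXT")))
```

For the token Smith in the TXT zone, this yields the non-contextual, zone, (initCaps, zone), and neighbor-lexicon features, mirroring the feature groups above.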
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i+1} and w_{i-1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, a corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, …, December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, …, Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
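The "frequency" used to rank candidate suffixes is thus the number of distinct preceding tokens, not the raw count. A minimal sketch (the helper name is our own, not from the original system) reproduces the Electric Corp. / Manufacturing Corp. example from the text:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """Map each final token of an organization name to the number of
    distinct tokens that immediately precede it in the training data."""
    preceding = defaultdict(set)
    for name in org_names:          # each name is a list of tokens
        if len(name) >= 2:
            preceding[name[-1]].add(name[-2])
    return {last: len(prevs) for last, prevs in preceding.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times,
# and Corp. never seen with any other preceding token:
train = [["Electric", "Corp."]] * 3 + [["Manufacturing", "Corp."]] * 5
print(suffix_frequencies(train)["Corp."])   # the "frequency" of Corp. is 2
```

Counting distinct predecessors rather than raw occurrences keeps a suffix that appears many times with a single organization from outranking one shared by many organizations.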
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens from w_i to the end of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence (including w_{i-1}) is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w_{i-1}, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'The McCann family . . . (3)', 'Table 2: Sources of Dictionaries — Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (footnote 2). [Table 3: F-measure after successive addition of each global feature group — MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.] [Table 4: Training Data — MENERGI: 318 articles, 160,000 tokens (MUC6) and 200 articles, 180,000 tokens (MUC7); IdentiFinder: –, 650,000 (MUC6) and –, 790,000 (MUC7); MENE: –, – (MUC6) and 350 articles, 321,000 tokens (MUC7).] [Table 5: Comparison of results for MUC6.] For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder (footnote 3).', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's.
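The error reductions quoted with Table 3 follow directly from the F-measures, taking relative error reduction over (100 − F). A quick arithmetic check:

```python
def error_reduction(old_f, new_f):
    """Relative reduction in error when the F-measure moves from old_f
    to new_f (both given in percent)."""
    return (new_f - old_f) / (100.0 - old_f)

muc6 = error_reduction(90.75, 93.27)   # baseline -> + UNIQ on MUC6
muc7 = error_reduction(85.22, 87.24)   # baseline -> + UNIQ on MUC7
print(round(muc6 * 100), round(muc7 * 100))   # matches the reported 27 and 14
```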
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) [Table 6: Comparison of results for MUC7.]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.']",extractive
C00-2123,C00-2123,6,39,"In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.","These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, the token after the hyphen is also used as a feature: (initCaps, that token string) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the previous and next tokens are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the previous or next token is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If it is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1.', 'If it is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
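The distinct-preceding-token "frequency" used to build Corporate-Suffix-List can be sketched as follows. The function and variable names are hypothetical; only the counting rule comes from the text.

```python
# Sketch of the Corporate-Suffix-List "frequency": the frequency of a
# candidate suffix is the number of DISTINCT tokens that precede it as the
# last word of an organization name, not its raw occurrence count.
from collections import defaultdict

def suffix_frequencies(org_names):
    """org_names: list of organization names, each a list of tokens."""
    preceders = defaultdict(set)
    for tokens in org_names:
        if len(tokens) >= 2:
            # record which token immediately precedes the final token
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}

# The example from the text: Electric Corp. seen 3 times and Manufacturing
# Corp. seen 5 times gives Corp. a "frequency" of 2.
orgs = [["Electric", "Corp."]] * 3 + [["Manufacturing", "Corp."]] * 5
# suffix_frequencies(orgs)["Corp."] -> 2
```

Counting distinct preceders rather than raw counts keeps a suffix that appears with many different organizations ranked above a token that merely recurs in one frequent name.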
For a token that is in a consecutive sequence of initCaps tokens, if any token in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the word preceding the sequence up to the token itself is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann ... (2)', 'The McCann family ... (3)', '(Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names and Person Last Names: http://www.census.gov/genealogy/names.)', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on ...", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on ...").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token needs to be in initCaps to be considered for this feature.', 'If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; +ICOC 91.50% / 86.24%; +CSPP 92.89% / 86.96%; +ACRO 93.04% / 86.99%; +SOIC 93.25% / 87.22%; +UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (articles / tokens): MENERGI 318 / 160,000 (MUC6) and 200 / 180,000 (MUC7); IdentiFinder – / 650,000 (MUC6) and – / 790,000 (MUC7); MENE – (MUC6) and 350 / 321,000 (MUC7).', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's.
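The ACRO matching described in the global feature section can be sketched as follows. This is a simplified assumption (a flat token list, no zones, exact initial matching) rather than the system's actual procedure, and the helper name is hypothetical.

```python
# Sketch of the ACRO feature group: collect allCaps words as acronyms, then
# mark initCaps word sequences whose initials spell an acronym with
# A_begin / A_continue / A_end, and the acronym itself with A_unique.

def acro_features(tokens):
    """Return a dict mapping token index -> list of ACRO features."""
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats = {i: [] for i in range(len(tokens))}
    for i in range(len(tokens)):
        for acro in acronyms:
            n = len(acro)
            seq = tokens[i:i + n]
            # sequence of initCaps (but not allCaps) words whose
            # initials spell out the acronym
            if (len(seq) == n
                    and all(w[:1].isupper() and not w.isupper() for w in seq)
                    and "".join(w[0] for w in seq) == acro):
                feats[i].append("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].append("A_continue")
                feats[i + n - 1].append("A_end")
                for k, t in enumerate(tokens):
                    if t == acro:
                        feats[k].append("A_unique")
    return feats
```

On the FCC example from the text, Federal receives A_begin, Communications A_continue, Commission A_end, and FCC itself A_unique.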
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu.)', '(Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W06-3114_swastika,W06-3114,7,175,They found replacing it with a ranked evaluation to be more suitable.,"By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3)In sentence (1), McCann can be a person or an orga nization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with â\x80\x9cBush put a freeze on . . .', 'â\x80\x9d, because Bush is the first word, the initial caps might be due to its position (as in â\x80\x9cThey put a freeze on . . .', 'â\x80\x9d).', 'If somewhere else in the document we see â\x80\x9crestrictions put in place by President Bushâ\x80\x9d, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (see footnote 2).', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (Systems; No. of Articles / No. of Tokens for MUC6, then MUC7): MENERGI 318 / 160,000, 200 / 180,000; IdentiFinder – / 650,000, – / 790,000; MENE – / –, 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder (see footnote 3).', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
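The 27% and 14% error-reduction figures quoted alongside Table 3 follow from the reported F-measures; a quick check, assuming the reduction is measured on the residual error 100 - F:

```python
# Relative error reduction implied by the Table 3 F-measures.
# Assumes "reduction in error" means the relative drop in (100 - F).
def error_reduction(baseline_f, final_f):
    return (final_f - baseline_f) / (100.0 - baseline_f)

muc6 = error_reduction(90.75, 93.27)  # baseline -> all global features added
muc7 = error_reduction(85.22, 87.24)

print(f"MUC6: {muc6:.0%}, MUC7: {muc7:.0%}")  # MUC6: 27%, MUC7: 14%
```

Both values match the percentages stated in the text, which supports reading "reduction in error" as relative rather than absolute.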
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W04-0213,W04-0213,3,6,"the ""Potsdam Commentary Corpus"" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
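The probabilities in the two decoding objectives contrasted above were lost in extraction; a plausible reconstruction, in my own notation ($s$ for the sentence, $t_1 \dots t_n$ for the named-entity tag sequence, $D$ for the document containing $s$), rather than the paper's exact symbols:

```latex
% Standard sentence-level decoding used by most statistical NERs:
\hat{t}_1^n = \operatorname*{arg\,max}_{t_1 \dots t_n} P(t_1 \dots t_n \mid s)

% Proposed objective: condition on document-level information as well:
\hat{t}_1^n = \operatorname*{arg\,max}_{t_1 \dots t_n} P(t_1 \dots t_n \mid s, D)
```

The second form is what lets one maximum entropy classifier use global features directly, instead of a secondary error-correcting classifier.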
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context (e.g., a feature that is 1 if the outcome is true and the previous word is “the”, and 0 otherwise).', 'The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (see footnote 1).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(Footnote 1: http://maxent.sourceforge.net)', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × the total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
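The Case and Zone feature group above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the function name and the string encoding of the (case, zone) feature pairs are my own.

```python
# Sketch of the Case and Zone local features: for a token and its document
# zone, fire (initCaps, zone), (allCaps, zone), and/or (mixedCaps, zone).
def case_features(token, zone):
    feats = {}
    if token[:1].isupper():
        feats[f"initCaps-{zone}"] = 1
    if token.isupper():
        feats[f"allCaps-{zone}"] = 1  # an allCaps token is also initCaps
    if token[:1].islower() and any(c.isupper() for c in token):
        feats[f"mixedCaps-{zone}"] = 1
    return feats

print(case_features("IBM", "TXT"))   # initCaps and allCaps both fire
print(case_features("eBay", "HL"))   # mixedCaps fires
```

Note that, as the text says, an allCaps token also fires initCaps, so the three cases are not mutually exclusive.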
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . . , December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
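The "frequency" computation described above (the number of distinct preceding tokens seen with each candidate last token) can be sketched as follows; the function name and input layout are my own, chosen to reproduce the paper's Electric Corp. / Manufacturing Corp. example.

```python
from collections import defaultdict

# Sketch of the suffix "frequency": for each token seen as the last token
# of an organization name, count how many DISTINCT preceding tokens it has.
def suffix_frequency(org_names):
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# Paper's example: Electric Corp. seen 3 times, Manufacturing Corp. 5 times,
# and Corp. with no other preceding token -> "frequency" of Corp. is 2.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequency(orgs))  # {'Corp.': 2}
```

Counting distinct predecessors rather than raw occurrences is what keeps a suffix frequent only when it combines productively with many different organization names.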
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (see footnote 2).', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (Systems; No. of Articles / No. of Tokens for MUC6, then MUC7): MENERGI 318 / 160,000, 200 / 180,000; IdentiFinder – / 650,000, – / 790,000; MENE – / –, 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder (see footnote 3).', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
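The ACRO feature assignment described earlier (FCC matching Federal Communications Commission) can be sketched as follows. This is an illustrative reconstruction: the function name and the A_begin / A_continue / A_end / A_unique spellings are my own encoding of the features named in the text.

```python
# Sketch of the ACRO global features: mark initCaps word sequences whose
# initials spell an acronym found in the document, and mark the acronym.
def acro_features(tokens, acronyms):
    feats = [set() for _ in tokens]
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            # a sequence of initCaps words whose initials spell the acronym
            if all(w[:1].isupper() for w in window) and \
               "".join(w[0] for w in window) == acro:
                feats[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].add("A_continue")
                feats[i + n - 1].add("A_end")
        for i, w in enumerate(tokens):
            if w == acro:
                feats[i].add("A_unique")
    return feats

doc = "FCC says Federal Communications Commission rules apply".split()
print(acro_features(doc, ["FCC"]))
```

On this input, Federal gets A_begin, Communications A_continue, Commission A_end, and FCC itself A_unique, mirroring the example in the text.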
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive P11-1061_swastika,P11-1061,2,2,"Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.","A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information. In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.

Statistical NERs usually find the sequence of tags T that maximizes the probability P(T | S), where S is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999). We propose maximizing P(T | S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S.
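The sentence-level decoding just described, choosing the tag sequence that maximizes the product of per-token probabilities, can be illustrated with a brute-force toy. Everything here (the words, tags, and probability table) is invented for illustration; a real NER would use a trained classifier rather than a lookup table.

```python
from itertools import product

def best_tag_sequence(sentence, tags, prob):
    """Brute-force argmax over tag sequences of the product of
    per-token probabilities p(tag | word).

    `prob` maps (word, tag) -> probability; unknown pairs score 0.
    Exponential in sentence length, so only usable as a toy.
    """
    best, best_score = None, -1.0
    for seq in product(tags, repeat=len(sentence)):
        score = 1.0
        for word, tag in zip(sentence, seq):
            score *= prob.get((word, tag), 0.0)
        if score > best_score:
            best, best_score = list(seq), score
    return best, best_score
```

In practice the paper replaces exhaustive enumeration with dynamic programming (Section 3.2), but the objective being maximized is the same product form.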
Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999). We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush"). As such, global information from the whole context of a document is important to more accurately recognize named entities. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.

Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants. MENE without Proteus, however, did not do very well and only achieved an F-measure of 84.22% (Borthwick, 1999). Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 × 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data. The system described in
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

p(o | h) = (1 / Z(h)) × Π_j α_j^{f_j(h, o)}

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

f_j(h, o) = 1 if o = true and the previous word is "the"; 0 otherwise.

The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes P(c_i → c_j) to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes assigned to the words in a sentence in a document is defined as
follows:

P(c_1, …, c_n | s, D) = Π_{i=1..n} p(c_i | s, D) × Π_{i=1..n−1} P(c_i → c_{i+1})

where p(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × total number of possible zones) features.

Case and Zone of w−1 and w+1: Similarly, if w−1 (or w+1) is initCaps, a feature (initCaps, zone) of w−1 (or of w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string w, as listed in Table 1 (Table 1: Features based on the token string). For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w−1 and the next token w+1 is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. Same for w−1.
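A few of the local feature groups just described can be sketched as a token-to-binary-features function. This is an illustrative simplification, not the authors' exact feature set or identifiers: only the non-contextual, zone, case-and-zone, InitCapPeriod, firstword, and neighbor-lexicon groups are shown.

```python
def case_type(tok):
    # Case categories used by several of the feature groups.
    if tok.isupper():
        return "allCaps"
    if tok[:1].isupper():
        return "initCaps"
    if tok[:1].islower() and any(c.isupper() for c in tok):
        return "mixedCaps"
    return "lower"

def local_features(tokens, i, zone):
    """Binary local features for tokens[i], in the spirit of Section 4.1.

    Feature names here are made up for readability; each key maps to 1,
    matching the paper's convention of binary indicator features.
    """
    w = tokens[i]
    feats = {"nonContextual": 1, f"zone-{zone}": 1}
    feats[f"({case_type(w)}, {zone})"] = 1
    if w[:1].isupper() and w.endswith("."):
        feats["InitCapPeriod"] = 1
    if i == 0:
        feats["firstword"] = 1
    # Lexicon features of previous/next token, conditioned on the case of w.
    cap = "initCaps" if w[:1].isupper() else "not-initCaps"
    if i + 1 < len(tokens):
        feats[f"({cap}, next={tokens[i+1]})"] = 1
    if i > 0:
        feats[f"({cap}, prev={tokens[i-1]})"] = 1
    return feats
```

Each token thus contributes a sparse set of active binary features, which is exactly what a maximum entropy classifier consumes.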
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w−1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, …, December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, …, Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
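The distinct-previous-token "frequency" used to compile the Corporate-Suffix-List can be sketched directly from the paper's own example. The `min_distinct` threshold is an assumption for illustration; the paper does not state the exact cutoff it used.

```python
from collections import defaultdict

def corporate_suffix_list(org_names, min_distinct=2):
    """Compile a Corporate-Suffix-List from organization names.

    Per the paper's definition, the "frequency" of a candidate suffix
    is the number of DISTINCT previous tokens it appears with, not its
    raw count. `org_names` is a list of tokenized organization names.
    """
    preceding = defaultdict(set)
    for name in org_names:
        if len(name) >= 2:
            preceding[name[-1]].add(name[-2])
    return {suffix for suffix, prevs in preceding.items()
            if len(prevs) >= min_distinct}
```

With the paper's example (Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times), Corp. has frequency 2 because it follows exactly two distinct tokens, regardless of the raw counts.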
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens following w (up to the end of the sequence) is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check for w−1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)
The McCann family . . . (3)

Table 2: Sources of Dictionaries
Description          Source
Location Names       http://www.timeanddate.com
                     http://www.cityguide.travel-guides.com
                     http://www.worldtravelguide.net
Corporate Names      http://www.fmlx.com
Person First Names   http://www.census.gov/genealogy/names
Person Last Names

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2

Table 3: F-measure after successive addition of each global feature group
            MUC6     MUC7
Baseline    90.75%   85.22%
+ ICOC      91.50%   86.24%
+ CSPP      92.89%   86.96%
+ ACRO      93.04%   86.99%
+ SOIC      93.25%   87.22%
+ UNIQ      93.27%   87.24%

Table 4: Training Data
                MUC6                     MUC7
                Articles   Tokens        Articles   Tokens
MENERGI         318        160,000       200        180,000
IdentiFinder    –          650,000       –          790,000
MENE            –          –             350        321,000

Table 5: Comparison of results for MUC6

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets. All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
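The error-reduction figures quoted alongside Table 3 follow directly from the F-measures, taking error to be 100 minus F-measure; a small check:

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error, where error = 100 - F-measure."""
    return (final_f - baseline_f) / (100.0 - baseline_f)

# From Table 3: MUC6 goes 90.75 -> 93.27, MUC7 goes 85.22 -> 87.24.
muc6 = error_reduction(90.75, 93.27)   # roughly 0.27
muc7 = error_reduction(85.22, 87.24)   # roughly 0.14
```

Rounded to whole percentages, these reproduce the 27% and 14% reductions reported for MUC6 and MUC7 respectively.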
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.

The effect of a second reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
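The abbreviation tendency appealed to in the conclusion is exactly what the ACRO feature group (Section 4.2) exploits. A minimal sketch, with invented feature names and no document zoning, of matching all-caps acronyms against the initials of initCaps sequences:

```python
def acronym_features(tokens):
    """Sketch of the ACRO group: collect all-caps acronyms from the
    document, then mark initCaps sequences whose initials spell one.

    Returns a dict mapping token index -> set of active feature names.
    A simplified reading of the paper's description.
    """
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats = {i: set() for i in range(len(tokens))}
    for i, t in enumerate(tokens):
        if t in acronyms:
            feats[i].add("A_unique")
    i = 0
    while i < len(tokens):
        # Walk over maximal runs of initCaps (but not allCaps) tokens.
        if tokens[i][:1].isupper() and not tokens[i].isupper():
            j = i
            while j < len(tokens) and tokens[j][:1].isupper() and not tokens[j].isupper():
                j += 1
            initials = "".join(t[0] for t in tokens[i:j])
            if initials in acronyms:
                feats[i].add("A_begin")
                for k in range(i + 1, j - 1):
                    feats[k].add("A_continue")
                if j - 1 > i:
                    feats[j - 1].add("A_end")
            i = j
        else:
            i += 1
    return feats
```

On the paper's FCC / Federal Communications Commission example, this marks the expanded form with begin/continue/end features and the acronym itself with a unique feature, tying the two mentions together without any hand-coded rules.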
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) * prod_j alpha_j^{f_j(h,o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package [1].', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '[1] http://maxent.sourceforge.net', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability P(c_i | c_j) between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, . . ., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, . . ., c_n | s, D) = prod_i p(c_i | s, D) * prod_i P(c_i | c_{i-1}), where p(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w_{-2} to w_{+2} (from two tokens before to two tokens after w), we used only the tokens w_{-1}, w, and w_{+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w_{-1} and w_{+1}: Similarly, if w_{-1} (or w_{+1}) is initCaps, a corresponding feature (initCaps, zone) for w_{-1} (or w_{+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string w of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The strings of the previous token w_{-1} and the next token w_{+1} are used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w_{+1}) is set to 1.', 'If w is not initCaps, then (not-initCaps, w_{+1}) is set to 1.', 'Same for w_{-1}. 
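A minimal sketch of how binary feature groups of this kind might be generated for a token. The function and feature-group names here are illustrative, not the authors' actual implementation; the sketch covers only a few of the groups described above (zone, case-and-zone, first word, lexicon, and previous/next-token lexicon features):

```python
def case_feature(token):
    """Classify the case pattern of a token: allCaps, initCaps, or mixedCaps.
    Following the text, an allCaps token also counts as initCaps downstream."""
    if token.isupper():
        return "allCaps"           # e.g. "IBM"
    if token[0].isupper():
        return "initCaps"          # e.g. "Bush"
    if any(c.isupper() for c in token):
        return "mixedCaps"         # e.g. "eBay"
    return None

def local_features(tokens, i, zone, lexicon, first_word):
    """Return the set of binary features that are 'set to 1' for tokens[i].
    `lexicon` stands in for the set of token strings kept after the
    feature cutoff; `zone` is the document zone (e.g. "TXT")."""
    w = tokens[i]
    feats = {"non-contextual", f"zone-{zone}"}   # Non-Contextual and Zone groups
    case = case_feature(w)
    if case:
        feats.add(f"({case}, {zone})")           # Case and Zone group
    if first_word:
        feats.add("firstword")                   # First Word group
    if w in lexicon:
        feats.add(f"lexicon={w}")                # Lexicon Feature group
    # Lexicon Feature of Previous and Next Token, keyed on w's case
    tag = "initCaps" if case in ("initCaps", "allCaps") else "not-initCaps"
    if i + 1 < len(tokens):
        feats.add(f"({tag}, next={tokens[i+1]})")
    if i > 0:
        feats.add(f"({tag}, prev={tokens[i-1]})")
    return feats

feats = local_features(["Mr.", "Smith", "resigned"], 1, "TXT", {"resigned"}, False)
```

Here the token Smith in the TXT zone fires (initCaps, TXT) plus previous/next-token features keyed on its initCaps status, while the rarely-seen string Smith itself contributes no lexicon feature.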
In the case where the next token w_{+1} is a hyphen, then w_{+2} is also used as a feature: (initCaps, w_{+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{-1} and w_{+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{+1} is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. 
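The "frequency" computation described above (counting distinct preceding tokens rather than raw occurrences) can be sketched as follows; the helper name is illustrative:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each last token of an organization name, count the number of
    DISTINCT previous tokens it appears with (the "frequency" in the text)."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            prev_tokens[tokens[-1]].add(tokens[-2])
    return {last: len(prevs) for last, prevs in prev_tokens.items()}

# The worked example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times gives Corp. a "frequency" of 2,
# because Corp. follows only 2 distinct tokens.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freq = suffix_frequencies(names)  # {"Corp.": 2}
```

Using distinct contexts rather than raw counts keeps a suffix from being dominated by one very frequent organization name.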
For a token w that is in a consecutive sequence of initCaps tokens (w_i, . . ., w_j), if any of the tokens following w_j is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w_{i-1} and earlier is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w_{i-1}, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net. Corporate Names: http://www.fmlx.com. Person First Names: http://www.census.gov/genealogy/names. Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy [2].', 'Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data. MENERGI: 318 MUC6 articles (160,000 tokens), 200 MUC7 articles (180,000 tokens); IdentiFinder: 650,000 MUC6 tokens, 790,000 MUC7 tokens; MENE: 350 MUC7 articles (321,000 tokens).', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder [3].', 'Table 5: Comparison of results for MUC6.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
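The 27% and 14% error reductions quoted above follow directly from the F-measures, since the reduction is computed on the error (100 minus F), not on F itself:

```python
def error_reduction(baseline_f, new_f):
    """Relative reduction in error when F-measure rises from baseline_f
    to new_f, with error taken as (100 - F)."""
    return 100.0 * (new_f - baseline_f) / (100.0 - baseline_f)

# MUC6: 90.75 -> 93.27, i.e. error 9.25 -> 6.73, about a 27% reduction
muc6 = error_reduction(90.75, 93.27)
# MUC7: 85.22 -> 87.24, i.e. error 14.78 -> 12.76, about a 14% reduction
muc7 = error_reduction(85.22, 87.24)
```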
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '[2] MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '[3] Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned.']
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

p(o | h) = (1 / Z(h)) ∏_j α_j^{f_j(h, o)}

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting whether a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

f_j(h, o) = 1 if o = true and the previous word in h is "the"; 0 otherwise.

The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package.¹ In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.

¹ http://maxent.sourceforge.net

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i−1}) to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, …, c_n assigned to the words in a sentence s in a document D is defined as follows:

P(c_1, …, c_n | s, D) = ∏_i P(c_i | s, D) × P(c_i | c_{i−1})

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint: multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen fewer than a small number of times during training are not used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Table 1: Features based on the token string

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × total number of possible zones) features.

Case and Zone of w+1 and w−1: Similarly, if w+1 (or w−1) is initCaps, a feature (initCaps, zone) for w+1 (or for w−1) is set to 1, etc.

Token Information: This group consists of 10 features based on the token string, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w−1 and the next token w+1 is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. Similarly for w−1.
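The binary feature groups described above can be sketched as a simple feature extractor. This is an illustrative reconstruction, not the authors' code: function and feature names are hypothetical, and only a few of the groups (zone, case, firstword, token information, lexicon, previous/next token) are shown.

```python
def local_features(tokens, i, zone, lexicon):
    """Compute a few of the local binary feature groups for token i.
    A feature is 'set to 1' when its name is present in the returned set.
    Names and structure are illustrative only."""
    w = tokens[i]
    feats = set()
    feats.add("non-contextual")              # set to 1 for all tokens
    feats.add(f"zone-{zone}")                # e.g. zone-TXT, zone-HL, ...
    if w[0].isupper():                       # initCaps
        feats.add(f"initCaps,{zone}")
    if w.isupper():                          # allCaps (also initCaps)
        feats.add(f"allCaps,{zone}")
    if i == 0:                               # firstword
        feats.add("firstword")
    if w[0].isupper() and w.endswith("."):   # e.g. "Mr." -> InitCapPeriod
        feats.add("InitCapPeriod")
    if w in lexicon:                         # lexicon feature for the token string
        feats.add(f"lexicon,{w}")
    # lexicon feature of previous/next token, paired with initCaps info of w
    case = "initCaps" if w[0].isupper() else "not-initCaps"
    if i + 1 < len(tokens):
        feats.add(f"({case},next={tokens[i+1]})")
    if i > 0:
        feats.add(f"({case},prev={tokens[i-1]})")
    return feats
```

For example, `local_features(["Mr.", "Brown"], 0, "TXT", {"Mr."})` sets firstword, InitCapPeriod, zone-TXT, and the (initCaps, next-token) feature, among others.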
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w+1 and w−1 are tested against each list, and if found, a corresponding feature will be set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, …, December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, …, Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
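The "frequency" used to rank candidate corporate suffixes counts distinct preceding tokens rather than raw occurrences. A minimal sketch of that computation (the function name is illustrative, not from the paper):

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """For each token seen as the last token of an organization name,
    count the number of DISTINCT previous tokens it follows.
    This is the 'frequency' used to build Corporate-Suffix-List."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            prev_tokens[toks[-1]].add(toks[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

# The example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times gives Corp. a "frequency" of 2,
# because only two distinct tokens precede it.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freq = suffix_frequency(orgs)  # freq["Corp."] == 2
```

Counting distinct contexts rather than raw counts keeps a suffix like Corp. from being ranked highly merely because one frequent company name repeats.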
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens from w to the last token of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from the token preceding the sequence up to w is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we also check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)
The McCann family . . . (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

Table 2: Sources of Dictionaries
  Description          Source
  Location Names       http://www.timeanddate.com
                       http://www.cityguide.travel-guides.com
                       http://www.worldtravelguide.net
  Corporate Names      http://www.fmlx.com
  Person First Names   http://www.census.gov/genealogy/names
  Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature I begin set to 1, Broadcasting has an additional feature I continue set to 1, and Corp. has an additional feature I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.² For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group
             MUC6     MUC7
  Baseline   90.75%   85.22%
  + ICOC     91.50%   86.24%
  + CSPP     92.89%   86.96%
  + ACRO     93.04%   86.99%
  + SOIC     93.25%   87.22%
  + UNIQ     93.27%   87.24%

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.³

Table 4: Training Data
                 MUC6                     MUC7
                 Articles    Tokens       Articles    Tokens
  MENERGI        318         160,000      200         180,000
  IdentiFinder   –           650,000      –           790,000
  MENE           –           –            350         321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
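The error reductions quoted for Table 3 follow from treating the gap to 100% F-measure as the error: the relative reduction is (F_final − F_baseline) / (100 − F_baseline). A quick arithmetic check (plain illustration, not code from the paper):

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error when F-measure (in percent)
    improves from baseline_f to final_f."""
    return (final_f - baseline_f) / (100.0 - baseline_f)

# MUC6: 90.75% -> 93.27% gives roughly a 27% error reduction.
muc6 = error_reduction(90.75, 93.27)
# MUC7: 85.22% -> 87.24% gives roughly a 14% error reduction.
muc7 = error_reduction(85.22, 87.24)
```

These reproduce the 27% (MUC6) and 14% (MUC7) figures reported in the text.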
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

² MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
³ Training data for IdentiFinder is actually given in words (i.e., 650K and 790K words), rather than tokens.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to information from the same document only. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token in a consecutive sequence of initCaps tokens whose last token is in Corporate-Suffix-List, a feature Corporate-Suffix is set to 1. If any of the tokens from the word preceding the sequence up to the current token is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)
The McCann family . . . (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

Table 2: Sources of Dictionaries
  Location Names:      http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net
  Corporate Names:     http://www.fmlx.com
  Person First Names and Person Last Names: http://www.census.gov/genealogy/names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. for an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. The token needs to be in initCaps to be considered for this feature. If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where the token appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements in both MUC6 and MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group
             MUC6     MUC7
  Baseline   90.75%   85.22%
  + ICOC     91.50%   86.24%
  + CSPP     92.89%   86.96%
  + ACRO     93.04%   86.99%
  + SOIC     93.25%   87.22%
  + UNIQ     93.27%   87.24%

Table 4: Training Data
                 MUC6                        MUC7
                 No. of Articles  Tokens     No. of Articles  Tokens
  MENERGI        318              160,000    200              180,000
  IdentiFinder   –                650,000    –                790,000
  MENE           –                –          350              321,000

Table 5: Comparison of results for MUC6.

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3 In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
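The relative error reductions quoted above follow directly from the Table 3 F-measures; the check below is plain arithmetic for the reader's convenience, not part of the original system.

```python
def error_reduction(baseline_f1, final_f1):
    """Relative reduction in error (1 - F1) from the baseline
    (local features only) to the full global-feature system."""
    baseline_error = 1.0 - baseline_f1
    final_error = 1.0 - final_f1
    return (baseline_error - final_error) / baseline_error

# MUC6: 90.75% -> 93.27% F1, roughly a 27% error reduction
muc6 = error_reduction(0.9075, 0.9327)
# MUC7: 85.22% -> 87.24% F1, roughly a 14% error reduction
muc7 = error_reduction(0.8522, 0.8724)
```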
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they added list membership features, which helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.

Table 6: Comparison of results for MUC7.

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajič and Zemánek, 2004; Habash and Roth, 2009).

To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply "Arabic") because of the unusual opportunity it presents for comparison to English parsing results. The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993). Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages. But Arabic contains a variety of linguistic phenomena unseen in English. Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation. For humans, this characteristic can impede the acquisition of literacy. How do additional ambiguities caused by devocalization affect statistical learning? How should the absence of vowels and syntactic markers influence annotation choices and grammar development? Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering.

Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2). Next we show that the ATB is similar to other treebanks in gross statistical terms, but that annotation consistency remains low relative to English (§3). We then use linguistic and annotation insights to develop a manually annotated grammar for Arabic (§4). To facilitate comparison with previous work, we exhaustively evaluate this grammar and two other parsing models when gold segmentation is assumed (§5). Finally, we provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing (§6). We quantify error categories in both
evaluation settings. To our knowledge, ours is the first analysis of this kind for Arabic parsing.

Arabic is a morphologically rich language with a root-and-pattern system similar to other Semitic languages. The basic word order is VSO, but SVO, VOS, and VO configurations are also possible.2 Nouns and verbs are created by selecting a consonantal root (usually triliteral or quadriliteral), which bears the semantic core, and adding affixes and diacritics. Particles are uninflected. Diacritics can also be used to specify grammatical relations such as case and gender. But diacritics are not present in unvocalized text, which is the standard form of, e.g., news media documents.3

Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form an.
  Word            Head Of   Complement   POS
  1. 'inna  "Indeed, truly"   VP      Noun   VBP
  2. 'anna  "That"            SBAR    Noun   IN
  3. 'in    "If"              SBAR    Verb   IN
  4. 'an    "to"              SBAR    Verb   IN

Let us consider an example of ambiguity caused by devocalization. Table 1 shows four words whose unvocalized surface forms an are indistinguishable. Whereas Arabic linguistic theory assigns (1) and (2) to the class of pseudo verbs, inna and her sisters, since they can be inflected, the ATB conventions treat (2) as a complementizer, which means that it must be the head of SBAR. The distinctions in the ATB are linguistically justified, but complicate parsing; Table 8a shows that the best model recovers SBAR at only 71.0% F1. Because these two words have identical complements, syntax rules are typically unhelpful for distinguishing between them. This is especially true in the case of quotations, which are common in the ATB, where (1) will follow a verb like (2) (Figure 1).

Even with vocalization, there are linguistic categories that are difficult to identify without semantic clues. Two common cases are the attributive adjective and the process nominal maSdar, which can have a verbal reading.4 Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness. Moreover, they are used as substantives much more frequently than is done in English.

2 Unlike machine translation, constituency parsing is not significantly affected by variable word order. However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009). In particular, the decision to represent arguments in verb-initial clauses as VP-internal makes VSO and VOS configurations difficult to distinguish. Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).
3 Techniques for automatic vocalization have been studied (Zitouni et al., 2006; Habash and Rambow, 2007). However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.
4 Traditional Arabic linguistic theory treats both of these types as subcategories of noun.
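The Table 1 ambiguity can be made concrete with a small devocalization sketch: stripping diacritics and collapsing alif variants (the normalization strategy described later in this paper) maps all four diacritized particles to one surface string. This is an illustrative sketch over standard Unicode codepoints, not the authors' code.

```python
import re

# Arabic diacritics (fathatan .. sukun) plus dagger alif
DIACRITICS = re.compile("[\u064B-\u0652\u0670]")
ALIF_VARIANTS = str.maketrans({"\u0622": "\u0627",   # alif madda  -> bare alif
                               "\u0623": "\u0627",   # alif hamza above
                               "\u0625": "\u0627"})  # alif hamza below

def devocalize(text):
    """Strip diacritics, drop taTweel, and collapse alif variants."""
    text = DIACRITICS.sub("", text)
    text = text.replace("\u0640", "")  # taTweel (elongation character)
    return text.translate(ALIF_VARIANTS)

# The four diacritized forms from Table 1: 'inna, 'anna, 'in, 'an
forms = ["\u0625\u0650\u0646\u0651",  # 'inna "Indeed, truly"
         "\u0623\u064E\u0646\u0651",  # 'anna "That"
         "\u0625\u0650\u0646",        # 'in   "If"
         "\u0623\u064E\u0646"]        # 'an   "to"
# all four collapse to the same unvocalized surface form
```

A statistical parser therefore sees a single token type where the vocalized text distinguishes four syntactically different words.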
Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form an (Table 1).

Process nominals name the action of the transitive or ditransitive verb from which they derive. The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case. When the maSdar lacks a determiner, the constituent as a whole resembles the ubiquitous annexation construct iDafa. Gabbard and Kulick (2008) show that there is significant attachment ambiguity associated with iDafa, which occurs in 84.3% of the trees in our development set. Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase. All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly.

For parsing, the most challenging form of ambiguity occurs at the discourse level. A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005). Instead of offsetting new topics with punctuation, writers of MSA insert connectives such as wa and fa to link new elements to both preceding clauses and the text as a whole. As a result, Arabic sentences are usually long relative to English, especially after segmentation (Table 2).

Table 2: Frequency distribution for sentence lengths in the WSJ (sections 2-23) and the ATB (p1-3).
  Length   English (WSJ)   Arabic (ATB)
  <= 20        41.9%           33.7%
  <= 40        92.4%           73.2%
  <= 63        99.7%           92.6%
  <= 70        99.9%           94.9%

English parsing evaluations usually report results on sentences up to length 40. Arabic sentences of up to length 63 would need to be evaluated to account for the same fraction of the data. We propose a limit of 70 words for Arabic parsing evaluations.

Table 4: Gross statistics for several different treebanks. Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 2-21 (train) and section 23 (test).
                 ATB      CTB6     Negra    WSJ
  Trees          23449    28278    20602    43948
  Word Types     40972    45245    51272    46348
  Tokens         738654   782541   355096   1046829
  Tags           32       34       499      45
  Phrasal Cats   22       26       325      27
  Test OOV       16.8%    22.2%    30.5%    13.2%

Table 3: Dev set frequencies for the two most significant discourse markers in Arabic; counts are skewed toward analysis as a conjunction.

The ATB gives several different analyses to these words to indicate different types of coordination. But it conflates the coordinating and discourse separator functions of wa into one analysis: conjunction (Table 3). A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (Al-Batal, 1990). We show that noun-noun vs.
discourse-level coordination ambiguity in Arabic is a significant source of parsing errors (Table 8c).

3.1 Gross Statistics.

Linguistic intuitions like those in the previous section inform language-specific annotation choices. The resulting structural differences between treebanks can account for relative differences in parsing performance. We compared the ATB5 to treebanks for Chinese (CTB6), German (Negra), and English (WSJ) (Table 4). The ATB is disadvantaged by having fewer trees with longer average yields.6 But to its great advantage, it has a high ratio of non-terminals/terminals (mean constituents / mean length). Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003). Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic. In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance.

5 LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1). We map the ATB morphological analyses to the shortened "Bies" tags for all experiments.
6 Generative parsing performance is known to deteriorate with sentence length. As a result, Habash et al. (2006) developed a technique for splitting and chunking long sentences. In application settings, this may be a profitable strategy.

3.2 Inter-annotator Agreement.

Annotation consistency is important in any supervised learning task. In the initial release of the ATB, inter-annotator agreement was inferior to other LDC treebanks (Maamouri et al., 2008). To improve agreement during the revision process, a dual-blind evaluation was performed in which 10% of the data was annotated by independent teams. Maamouri et al. (2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB. But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference (or similarity) between treebanks. Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB.

For a corpus C, let M be the set of tuples (n, l), where n is an n-gram with bracketing label l. If any n appears in a corpus position without a bracketing label, then we also add (n, NIL) to M. We call the set of unique n-grams with multiple labels in M the variation nuclei of C. Bracketing variation can result from either annotation errors or linguistic ambiguity. Human evaluation is one way to distinguish between the two cases. Following Dickinson (2005), we randomly sampled 100 variation nuclei from each corpus and evaluated each sample for the presence of an annotation error. The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ.7

Table 5: Evaluation of 100 randomly sampled variation nuclei types. The samples from each corpus were independently evaluated. The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate.

Table 5 shows type- and token-level error rates for each corpus. The 95% confidence intervals for type-level errors are (5580, 9440) for the ATB and (1400, 4610) for the WSJ. The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference. On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample. At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts. For example, one of the ATB samples was the determiner dhalik, "that." The sample occurred in 1507 corpus positions, and we found
that the annotations were consistent. If we remove this sample from the evaluation, then the ATB type-level error rises to 37.4% while the n-gram error rate increases to 6.24%. The number of ATB n-grams also falls below the WSJ sample size, as the largest WSJ sample appeared in only 162 corpus positions.

7 Unlike Dickinson (2005), we strip traces and only consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs). Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei.

Figure 2: An ATB sample from the human evaluation. The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a). But the city name Sharm Al-Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b).

We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003). Manual annotation results in human-interpretable grammars that can inform future treebank annotation decisions. A simple lexicalized PCFG with second-order Markovization gives relatively poor performance: 75.95% F1 on the test set.8 But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).

8 We use head-finding rules specified by a native speaker of Arabic. This PCFG is incorporated into the Stanford Parser, a factored model that chooses a 1-best parse from the product of constituency and dependency parses.

In our grammar, features are realized as annotations to basic category labels. We start with noun features since written Arabic contains a very high proportion of NPs. genitiveMark indicates recursive NPs with an indefinite nominal left daughter and an NP right daughter. This is the form of recursive levels in iDafa constructs. We also add an annotation for one-level iDafa constructs (oneLevelIdafa) since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008). For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead). Base NPs are the other significant category of nominal phrases. markBaseNP indicates these non-recursive nominal phrases. This feature includes named entities, which the ATB marks with a flat NP node dominating an arbitrary number of NNP pre-terminal daughters (Figure 2).

For verbs we add two features. First we mark any node that dominates (at any level) a verb phrase (markContainsVerb). This feature has a linguistic justification. Historically, Arabic grammar has identified two sentence types: those that begin with a nominal, and those that begin with a verb. But foreign learners are often surprised by the verbless predications that are frequently used in Arabic. Although these are technically nominal, they have become known as "equational" sentences. markContainsVerb is especially effective for distinguishing root S nodes of equational sentences. We also mark all nodes that dominate an SVO configuration (containsSVO). In MSA, SVO usually appears in non-matrix clauses.

Lexicalizing several POS tags improves performance. splitIN captures the verb/preposition idioms that are widespread in Arabic. Although this feature helps, we encounter one consequence of variable word order. Unlike the WSJ corpus, which has a high frequency of rules like VP -> VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects). For example, we might have VP -> VB NP PP, where the NP is the subject. This annotation choice weakens splitIN.

Table 6: Incremental dev set results for the manually annotated grammar (sentences of length <= 70).

We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers. All experiments use ATB parts 1-3 divided according to the canonical split suggested by Chiang et al. (2006). Preprocessing the raw trees improves parsing performance considerably.9 We first discard all trees dominated by X, which indicates errors and non-linguistic text. At the phrasal level, we remove all function tags and traces. We also collapse unary chains with identical basic categories like NP -> NP. The pre-terminal morphological analyses are mapped to the shortened "Bies" tags provided with the treebank. Finally, we add "DT" to the tags for definite nouns and adjectives (Kulick et al., 2006).

The orthographic normalization strategy we use is simple.10 In addition to removing all diacritics, we strip instances of taTweel, collapse variants of alif to bare alif,11 and map Arabic punctuation characters to their Latin equivalents. We retain segmentation markers, which are consistent only in the vocalized section of the treebank, to differentiate between e.g.
"they" and "their." Because we use the vocalized section, we must remove null pronoun markers.

In Table 7 we give results for several evaluation metrics. Evalb is a Java re-implementation of the standard labeled precision/recall metric.12

The ATB gives all punctuation a single tag. For parsing, this is a mistake, especially in the case of interrogatives. splitPUNC restores the convention of the WSJ. We also mark all tags that dominate a word with the feminine ending taa marbuuTa (markFeminine). To differentiate between the coordinating and discourse separator functions of conjunctions (Table 3), we mark each CC with the label of its right sister (splitCC). The intuition here is that the role of a discourse marker can usually be determined by the category of the word that follows it. Because conjunctions are elevated in the parse trees when they separate recursive constituents, we choose the right sister instead of the category of the next word. We create equivalence classes for verb, noun, and adjective POS categories.

9 Both the corpus split and pre-processing code are available at http://nlp.stanford.edu/projects/arabic.shtml.
10 Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation.
11 taTweel is an elongation character used in Arabic script to justify text. It has no syntactic function. Variants of alif are inconsistently used in Arabic texts. For alif with hamza, normalization can be seen as another level of devocalization.
12 For English, our Evalb implementation is identical to the most recent reference (EVALB20080701). For Arabic we add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB. Tokens tagged as PUNC are not discarded unless they consist entirely of punctuation.

Table 7: Test set results (Evalb LP/LR/F1, Leaf Ancestor scores, exact matches, and tagging accuracy) for the Stanford (v1.6.3), Bikel (v1.2), and Berkeley (Sep. 09) parsers, in Baseline and Gold POS configurations, for sentences of length <= 70 and for all lengths. Maamouri et al. (2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length <= 40. The Bikel GoldPOS configuration only supplies the gold POS tags; it does not force the parser to use them. We are unaware of prior results for the Stanford parser.

Figure 3: Dev set learning curves for the Berkeley, Stanford, and Bikel parsers (F1 against number of training trees, sentence lengths <= 70). All three curves remain steep at the maximum training set size of 18818 trees.

The Leaf Ancestor metric measures the cost of transforming guess trees to the reference (Sampson and Babarczy, 2003). It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB. For each terminal, the Leaf Ancestor metric extracts the shortest path to the root. It then computes a normalized Levenshtein edit distance between the extracted chain and the reference. The range of the score is between 0 and 1 (higher is better). We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores, along with the number of exactly matching guess trees.

5.1 Parsing Models.

The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features: 1. Presence of the determiner Al. 2. Contains digits. 3. Ends with the feminine affix taa marbuuTa. 4. Various verbal and adjectival suffixes. Other notable parameters are second-order vertical Markovization and marking of unary rules.

Modifying the Berkeley parser for Arabic is straightforward. After adding a ROOT node to all trees, we train a grammar using six split-and-merge cycles and no Markovization. We use the default inference parameters. Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings. However, when we pre-tag the input, as is recommended for English, we notice a 0.57% F1 improvement. We use the log-linear tagger of Toutanova et al. (2003), which gives 96.8% accuracy on the test set.

5.2 Discussion.

The Berkeley parser gives state-of-the-art performance for all metrics. Our baseline for all sentence lengths is 5.23% F1 higher than the best previous result. The difference is due to more careful pre-processing. However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009). Moreover, the Stanford parser achieves the most exact Leaf Ancestor matches and tagging accuracy that is only 0.1% below the Bikel model, which uses pre-tagged input. In Figure 4 we show an example of variation between the parsing models. We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8. The errors shown are from the Berkeley parser output, but they are representative of the other two parsing models.

Figure 4: The constituent "restoring of its constructive and effective role" parsed by the three different models (gold segmentation): (a) Reference, (b) Stanford, (c) Berkeley, (d) Bikel.

The ATB annotation distinguishes between verbal and nominal readings of maSdar process nominals. Like verbs, maSdar takes arguments and assigns case to its objects, whereas it also demonstrates nominal characteristics by, e.g., taking determiners and heading iDafa (Fassi Fehri, 1993). In the ATB, asta'adah is tagged 48 times as a noun and 9 times as a verbal noun. Consequently, all three parsers prefer the nominal reading. Table 8b shows that verbal nouns are the hardest pre-terminal categories to identify. None of the models attach the attributive adjectives correctly.

6 Joint Segmentation and Parsing.

Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words. Since these are distinct syntactic units, they are typically segmented. The ATB segmentation scheme is one of many alternatives. Until now, all evaluations of Arabic parsing, including the experiments in the previous section, have assumed gold segmentation. But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline. Segmentation errors cascade into the parsing phase, placing an artificial limit on parsing performance. Lattice parsing (Chappelier et al., 1999)
is an alternative to a pipeline that prevents cascading errors by placing all segmentation options into the parse chart.', 'Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic.', 'We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton.', 'To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a).', 'Formally, for a lexicon L and segments I ∈ L, O ∉ L, each word automaton accepts the language I∗(O + I)I∗.', 'Aside from adding a simple rule to correct alif deletion caused by the preposition l- (glyph garbled in extraction), no other language-specific processing is performed.', 'Our evaluation includes both weighted and unweighted lattices.', 'We weight edges using a unigram language model estimated with Good-Turing smoothing.', 'Despite their simplicity, unigram weights have been shown as an effective feature in segmentation models (Dyer, 2009).', 'The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005).', 'MADA uses an ensemble of SVMs to first re-rank the output of a deterministic morphological analyzer.', 'For each input token, the segmentation is then performed deterministically given the 1-best analysis.', 'Of course, this weighting makes the PCFG an improper distribution.', 'However, in practice, unknown word models also make the distribution improper.', 'Table 8: Per category performance of the Berkeley parser on sentence lengths ≤ 70 (dev set, gold segmentation).', '(a) Major phrasal categories (Label, # gold, F1): ADJP 1216 59.45; SBAR 2918 69.81; FRAG 254 72.87; VP 5507 78.83; S 6579 78.91; PP 7516 80.93; NP 34025 84.95; ADVP 1093 90.64; WHNP 787 96.00.', '(b) Major POS categories: scores garbled in extraction.', '(c) Ten lowest scoring (Collins, 2003)-style dependencies occurring more than 700 times (Parent, Head, Modifier, Dir, # gold, F1): NP NP TAG R 946 0.54; S S S R 708 0.57; NP NP ADJP R 803 0.64; NP NP NP R 2907 0.66; NP NP SBAR R 1035 0.67; NP NP PP R 2713 0.67; VP TAG PP R 3230 0.80; NP NP TAG L 805 0.85; VP TAG SBAR R 772 0.86; S VP NP L 961 0.87.', '(a) Of the high frequency phrasal categories, ADJP and SBAR are the hardest to parse.', 'We showed in §2 that lexical ambiguity explains the underperformance of these categories.', '(b) POS tagging accuracy is lowest for maSdar verbal nouns (VBG, VN) and adjectives (e.g., JJ).', 'Richer tag sets have been suggested for modeling morphologically complex distinctions (Diab, 2007), but we find that linguistically rich tag sets do not help parsing.', '(c) Coordination ambiguity is shown in dependency scores by e.g., ∗(S S S R) and ∗(NP NP NP R).', '∗(NP NP PP R) and ∗(NP NP ADJP R) are both iDafa attachment.', 'Since guess and gold trees may now have different yields, the question of evaluation is complex.', 'Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric.', 'But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal.', 'Therefore, we only score guess/gold pairs with identical character yields, a condition that allows us to measure parsing, tagging, and segmentation accuracy by ignoring whitespace.', 'Table 9 shows that MADA produces a high quality segmentation, and that the effect of cascading segmentation errors on parsing is only 1.92% F1.', 'However, MADA is language-specific and relies on manually constructed dictionaries.', 'Conversely, the lattice parser requires no linguistic resources and produces segmentations of comparable quality.', 'Nonetheless, parse quality 
is much lower in the joint model because a lattice is effectively a long sentence.', 'A cell in the bottom row of the parse chart is required for each potential whitespace boundary.', 'As we have said, parse quality decreases with sentence length.', 'Finally, we note that simple weighting gives nearly a 2% F1 improvement, whereas Goldberg and Tsarfaty (2008) found that unweighted lattices were more effective for Hebrew.', 'Table 9: Dev set results for sentences of length ≤ 70.', 'Coverage indicates the fraction of hypotheses in which the character yield exactly matched the reference.', 'Each model was able to produce hypotheses for all input sentences.', 'In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.', 'By establishing significantly higher parsing baselines, we have shown that Arabic parsing performance is not as poor as previously thought, but remains much lower than English.', 'We have described grammar state splits that significantly improve parsing performance, catalogued parsing errors, and quantified the effect of segmentation errors.', 'With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus.', 'Our results suggest that current parsing models would benefit from better annotation consistency and enriched annotation in certain syntactic configurations.', 'Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work.', 'We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions.', 'The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship.', 'This paper is based on work supported in part by DARPA through IBM.', 'The content does not necessarily reflect the views of the U.S. 
Government, and no official endorsement should be inferred.']",abstractive
C00-2123,C00-2123,5,35,"An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.",Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(s | w), where w is the sequence of words in a sentence, and s is the sequence of named-entity tags assigned to the words in w. 
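The contrast described above, between classifying a name with sentence-local evidence alone and additionally conditioning on information from the whole document, can be sketched in a few lines. This is a toy illustration, not the paper's classifier; every name and probability below is invented.

```python
# A minimal sketch (not the paper's classifier) contrasting tagging with
# sentence-local evidence alone against tagging that also conditions on
# document-level evidence. All names and probabilities are invented.
P = {
    # (token, document evidence) -> toy conditional distribution over tags.
    ("McCann", "no_doc_info"): {"PER": 0.45, "ORG": 0.55},
    ("McCann", "seen_with_Mr"): {"PER": 0.80, "ORG": 0.20},
}

def best_tag(token, doc_evidence):
    """Return the tag maximizing the (toy) conditional probability."""
    dist = P[(token, doc_evidence)]
    return max(dist, key=dist.get)

# Locally the token is ambiguous; the global cue (seen elsewhere in the
# same document as "Mr. McCann") flips the decision to PER.
print(best_tag("McCann", "no_doc_info"), best_tag("McCann", "seen_with_Mr"))  # ORG PER
```

The point of the sketch is only that the same token can receive different tags once the conditioning context includes document-level evidence.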
Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(s | w, D), where s is the sequence of named-entity tags assigned to the words in the sentence w, and D is the information that can be extracted from the whole document containing w. Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded 
systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves 
performance comparable to IdentiFinder when trained on similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏_j α_j^f_j(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare 
results of MENE, IdentiFinder, and MENERGI.', '1 http://maxent.sourceforge.net', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows: P(c_1, ..., c_n | s, D) = ∏_i P(c_i | s, D) × ∏_i P(c_i | c_i−1), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w−1, w, and w+1. 
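The testing step described above (per-word class probabilities from the classifier, 0/1 transition probabilities between classes, and dynamic programming to pick the best admissible sequence) can be sketched as follows. The class names, probabilities, and admissible pairs are invented toy values, not the paper's.

```python
# A minimal sketch (toy values, not the paper's code) of decoding with
# 0/1 transition probabilities: inadmissible class sequences are pruned,
# and dynamic programming selects the best admissible sequence.

def viterbi(word_probs, admissible):
    """word_probs: one dict {class: P(class | word, context)} per token.
    admissible: set of (prev_class, class) pairs with transition prob 1;
    every other transition has prob 0 and is skipped."""
    # best[c] = (score of best admissible sequence ending in c, that sequence)
    best = {c: (p, [c]) for c, p in word_probs[0].items()}
    for probs in word_probs[1:]:
        new_best = {}
        for c, p in probs.items():
            cands = [(s * p, path + [c]) for prev, (s, path) in best.items()
                     if (prev, c) in admissible]
            if cands:
                new_best[c] = max(cands)
        best = new_best
    return max(best.values())[1]

word_probs = [
    {"person_begin": 0.6, "org_begin": 0.4},
    {"person_end": 0.3, "location_unique": 0.7},
]
# person_begin -> location_unique is inadmissible, so the best sequence
# is chosen only among admissible paths.
admissible = {("person_begin", "person_end"), ("org_begin", "location_unique")}
print(viterbi(word_probs, admissible))
```

Here the locally best first tag (person_begin) loses because its only admissible continuation scores 0.18, while org_begin followed by location_unique scores 0.28.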
Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w−1 and w+1: Similarly, if w−1 (or w+1) is initCaps, a feature (initCaps, zone) of w−1 (or (initCaps, zone) of w+1
) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
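The suffix "frequency" defined above (the number of distinct previous tokens observed for each candidate last word) can be sketched as follows, reusing the Electric Corp. / Manufacturing Corp. example from the text. The helper name is hypothetical.

```python
from collections import defaultdict

# A small sketch of the "frequency" described above: for each candidate
# corporate suffix, count the number of DISTINCT preceding tokens, so a
# suffix seen many times after only one word does not look productive.
def suffix_frequency(org_names):
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        for prev, tok in zip(tokens, tokens[1:]):
            preceding[tok].add(prev)
    return {tok: len(prevs) for tok, prevs in preceding.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times:
# Corp. has 2 distinct preceding tokens, hence "frequency" 2.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequency(names)["Corp."])  # 2
```

Counting distinct contexts rather than raw occurrences is what makes Corp. score 2 here despite appearing 8 times.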
For a token w that is in a consecutive sequence of initCaps tokens, if the sequence ends with a token in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence (up to the word immediately before it) is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1) CEO of McCann . . . (2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names: (source garbled in extraction).', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (No. of Articles / No. of Tokens, MUC6 then MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
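The 27% and 14% error-reduction figures quoted above follow from the baseline and final (+ UNIQ) F-measures in Table 3, taking relative error reduction as (F_new − F_old) / (100 − F_old); the small helper below just checks that arithmetic.

```python
# Checking the error-reduction arithmetic quoted in the text: relative
# error reduction is the fraction of the remaining error (100 - F_old)
# that the new system removes.
def error_reduction(baseline_f, final_f):
    return 100 * (final_f - baseline_f) / (100 - baseline_f)

muc6 = error_reduction(90.75, 93.27)  # Table 3, MUC6 column
muc7 = error_reduction(85.22, 87.24)  # Table 3, MUC7 column
print(round(muc6), round(muc7))  # 27 14
```

Both rounded values match the percentages reported in the text.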
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens. Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive I05-5011,I05-5011,2,203,The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.,"A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
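The ACRO feature group described above (collecting allCaps acronyms, then matching sequences of initial-capitalized words against them) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, token representation, and feature-set layout are assumptions, while the feature names A_begin, A_continue, A_end, and A_unique follow the text.

```python
def acro_features(tokens):
    # Collect allCaps words as document-level acronyms (e.g., "FCC").
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    features = {i: set() for i in range(len(tokens))}
    # The acronym token itself gets A_unique.
    for i, t in enumerate(tokens):
        if t in acronyms:
            features[i].add("A_unique")
    # Scan maximal runs of initCaps (but not allCaps) words; if their
    # initials match a stored acronym, mark begin/continue/end features.
    i = 0
    while i < len(tokens):
        if tokens[i][:1].isupper() and not tokens[i].isupper():
            j = i
            while j < len(tokens) and tokens[j][:1].isupper() and not tokens[j].isupper():
                j += 1
            if j - i > 1 and "".join(t[0] for t in tokens[i:j]) in acronyms:
                features[i].add("A_begin")
                for k in range(i + 1, j - 1):
                    features[k].add("A_continue")
                features[j - 1].add("A_end")
            i = j
        else:
            i += 1
    return features

doc = "FCC charged that the Federal Communications Commission acted".split()
feats = acro_features(doc)
```

For the example in the text, Federal gets A_begin, Communications gets A_continue, Commission gets A_end, and FCC gets A_unique.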
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t_1, ..., t_n | s), where s is the sequence of words in a sentence, and t_1, ..., t_n is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t_1, ..., t_n | s, D), where t_1, ..., t_n is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F-measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) Π_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word in h is "the", and 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(Footnote 1: http://maxent.sourceforge.net) 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique).', 'To eliminate such sequences, we define a transition probability P(c_i | c_{i-1}) between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = Π_{i=1}^{n} P(c_i | s, D) · P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i+1} and w_{i-1}: Similarly, if w_{i+1} (or w_{i-1}) is initCaps, a corresponding feature (initCaps, zone) for the next (or previous) token is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the previous or next token is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
., Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. 
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the subsequent tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2 (Sources of Dictionaries): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names. The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp. 
has an additional feature of I_end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3 (F-measure after successive addition of each global feature group): Baseline 90.75% (MUC6), 85.22% (MUC7); + ICOC 91.50%, 86.24%; + CSPP 92.89%, 86.96%; + ACRO 93.04%, 86.99%; + SOIC 93.25%, 87.22%; + UNIQ 93.27%, 87.24%. Table 4 (Training Data; no. of articles and no. of tokens for MUC6 and MUC7): MENERGI 318 articles / 160,000 tokens and 200 articles / 180,000 tokens; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000. Table 5: Comparison of results for MUC6. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens. Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive D10-1083,D10-1083,2,16,"However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.","A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
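The testing-time decoding described earlier (per-token maximum entropy probabilities combined with 0/1 transition admissibility, maximized by dynamic programming) can be sketched as follows. This is a minimal Viterbi-style illustration under assumed data layouts (per-token class-probability dicts and an admissibility predicate); it is not the paper's code.

```python
def decode(probs, admissible):
    # probs: list of {class: P(c_i | s, D)} dicts, one per token.
    # admissible(prev, cur): True if the class transition is allowed
    # (transition probability 1), False otherwise (probability 0).
    # best maps each class to (best path score ending here, path).
    best = {c: (p, [c]) for c, p in probs[0].items()}
    for dist in probs[1:]:
        new_best = {}
        for c, p in dist.items():
            cands = [(score * p, path + [c])
                     for prev, (score, path) in best.items()
                     if admissible(prev, c)]
            if cands:
                new_best[c] = max(cands, key=lambda t: t[0])
        best = new_best
    return max(best.values(), key=lambda t: t[0])[1]

# Inadmissible sequences (e.g., person_begin followed by location_unique)
# are pruned even when their per-token probabilities look attractive.
probs = [{"person_begin": 0.6, "not-a-name": 0.4},
         {"person_end": 0.7, "location_unique": 0.3}]
ok = {("person_begin", "person_end"), ("not-a-name", "location_unique")}
path = decode(probs, lambda p, c: (p, c) in ok)
```

The 0/1 transition terms simply remove inadmissible paths from the maximization, so the dynamic program only compares sequences whose class subdivisions (begin/continue/end/unique) are consistent.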
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
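The local feature groups above are all binary indicators computed from a token and its zone. A minimal sketch of how a few of them (Zone, Case and Zone, First Word, InitCapPeriod) might be computed; the function name and feature-string format are illustrative assumptions, and the lexicon and dictionary groups are omitted:

```python
def local_features(token, zone, is_first_word):
    """Sketch of a few of the binary local feature groups described
    above; feature names are illustrative, not the paper's exact ones."""
    feats = set()
    feats.add("zone-" + zone)                # Zone feature
    if token[0].isupper():
        if token.isupper():
            feats.add(f"allCaps,{zone}")     # all capital letters
        feats.add(f"initCaps,{zone}")        # allCaps tokens are also initCaps
    elif any(c.isupper() for c in token):
        feats.add(f"mixedCaps,{zone}")       # lowercase start, mixed case
    if is_first_word:
        feats.add("firstword")               # First Word feature group
    if token[0].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")           # e.g. "Mr."
    return feats
```

Note that, as in the text, an all-capitals token fires both the allCaps and the initCaps features, since multiple features may be set for the same token in the maximum entropy framework.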
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
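The "frequency" used to rank candidate corporate suffixes (the number of distinct previous tokens each last token appears with) can be sketched as follows; the function name and the plain whitespace tokenization are assumptions for illustration:

```python
from collections import defaultdict

def corporate_suffix_frequencies(org_names):
    """For each token seen as the last token of an organization name,
    count the number of DISTINCT previous tokens it appears with.
    This is the 'frequency' used to rank candidate corporate suffixes."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            prev_tokens[toks[-1]].add(toks[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

# Example from the text: "Electric Corp." seen 3 times and
# "Manufacturing Corp." seen 5 times gives Corp. a frequency of 2,
# since only two distinct preceding tokens are observed.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
```

Counting distinct preceding tokens rather than raw occurrences keeps a suffix that appears many times with a single organization name from dominating the list.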
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens after it in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) [Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names] The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (see footnote 2).', '[Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%]', '[Table 4: Training Data (No. of Articles / No. of Tokens): MENERGI 318 / 160,000 (MUC6) and 200 / 180,000 (MUC7); IdentiFinder – / 650,000 (MUC6) and – / 790,000 (MUC7); MENE – / – (MUC6) and 350 / 321,000 (MUC7)]', '(Table 5: Comparison of results for MUC6)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder (see footnote 3).', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
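The acronym-matching (ACRO) feature group described earlier can be sketched roughly as below. This is a simplified reading of the mechanism: acronyms are all-capitals tokens, and a sequence of initial-capitalized words matches an acronym when its initial letters spell it out; the function and feature names are illustrative assumptions:

```python
def acro_features(tokens, acronyms):
    """Assign A_begin / A_continue / A_end to sequences of initCaps
    words whose initial letters spell an acronym found in the document,
    and A_unique to the acronym token itself.  Simplified sketch."""
    feats = {i: set() for i in range(len(tokens))}
    for i, tok in enumerate(tokens):
        if tok.isupper() and len(tok) > 1 and tok in acronyms:
            feats[i].add("A_unique")
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            # initCaps (but not allCaps) words whose initials spell the acronym
            if all(w[0].isupper() and not w.isupper() for w in window) and \
               "".join(w[0] for w in window) == acro:
                feats[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].add("A_continue")
                feats[i + n - 1].add("A_end")
    return feats
```

With the text's example, FCC alongside Federal Communications Commission in the same document would mark Federal as A_begin, Communications as A_continue, Commission as A_end, and FCC as A_unique.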
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', '(Table 6: Comparison of results for MUC7)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.']",extractive I05-5011,I05-5011,5,12,"This topic has been getting more attention, driven by the needs of various NLP applications.",Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_i P(c_i | s, D) * prod_i P(c_i | c_{i-1}), where the per-word class probability P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1.', 'Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '[Table 1: Features based on the token string]', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string of w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The strings of the previous token w-1 and the next token w+1 are used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1. 
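The case-and-zone and token-string feature groups described above can be sketched as follows. This is a minimal illustration in our own code, not the authors' implementation; the function names are ours, and AllDigits stands in for one of the 10 Table 1 features, whose exact names are not recovered here.

```python
def case_zone_features(token, zone):
    """Binary (case, zone) features that fire for a token.

    Sketch of the Case and Zone group: a token may set zero, one,
    or more features in a group to 1.
    """
    feats = set()
    if token[:1].isupper():
        feats.add(("initCaps", zone))
    if token.isupper():
        # a token that is allCaps will also be initCaps
        feats.add(("allCaps", zone))
    if token[:1].islower() and not token.islower():
        # starts lower case but contains both upper and lower case
        feats.add(("mixedCaps", zone))
    return feats


def token_info_features(token):
    """Two of the token-string features in the spirit of Table 1."""
    feats = set()
    if token[:1].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")  # e.g. "Mr."
    if token.isdigit():
        feats.add("AllDigits")      # assumed name for a Table 1 feature
    return feats
```

For example, an all-capitals token such as IBM in the TXT zone fires both (initCaps, TXT) and (allCaps, TXT), exactly the overlap the text notes.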
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the next token is found in the list of person first names, the corresponding feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
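The "frequency" used to rank candidate corporate suffixes counts distinct preceding tokens, not raw occurrences. A minimal sketch of that counting, matching the Electric Corp. / Manufacturing Corp. example above (function and variable names are ours):

```python
from collections import defaultdict


def suffix_frequencies(org_names):
    """For each final token of an organization name, count the number of
    distinct tokens seen immediately before it. This is the paper's
    notion of 'frequency' for candidate corporate suffixes."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            prev_tokens[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}


# Electric Corp. seen 3 times and Manufacturing Corp. seen 5 times:
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freq = suffix_frequencies(names)  # {"Corp.": 2}
```

Here Corp. gets frequency 2 (two distinct preceding tokens), even though it occurs 8 times in total, which is the distinction the example in the text draws.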
For a token w that is in a consecutive sequence of initCaps tokens (w-m, . . . , w, . . . , w+n), if any of the tokens from w+1 to w+n+1 is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w-m-1 to w-1 is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w-m-1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group
             MUC6     MUC7
  Baseline   90.75%   85.22%
  + ICOC     91.50%   86.24%
  + CSPP     92.89%   86.96%
  + ACRO     93.04%   86.99%
  + SOIC     93.25%   87.22%
  + UNIQ     93.27%   87.24%', 'Table 4: Training Data
                MUC6 Articles   MUC6 Tokens   MUC7 Articles   MUC7 Tokens
  MENERGI      318             160,000       200             180,000
  IdentiFinder  -               650,000       -               790,000
  MENE          -               -             350             321,000', '[Table 5: Comparison of results for MUC6; table content not recovered]', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
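The quoted 27% and 14% error reductions follow from the Table 3 F-measures if error is taken as 100 minus F-measure. A quick arithmetic check (the helper name is ours):

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error, treating error as (100 - F-measure)."""
    return (final_f - baseline_f) / (100.0 - baseline_f)


# Table 3: baseline vs. all global feature groups added
muc6 = error_reduction(90.75, 93.27)  # about 0.27, the quoted 27%
muc7 = error_reduction(85.22, 87.24)  # about 0.14, the quoted 14%
```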
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', '[Table 6: Comparison of results for MUC7; table content not recovered]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W06-3114_sweta,W06-3114,5,174,The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.,"These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
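The "frequency" computation described above (the number of distinct previous tokens for each candidate last word of an organization name) can be sketched as follows; `suffix_frequency` and `compile_suffix_list` are hypothetical names for illustration, not from the paper:

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """Map each final token of an organization name to the number of
    distinct tokens that precede it (the paper's notion of 'frequency')."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {tok: len(prevs) for tok, prevs in preceding.items()}

def compile_suffix_list(org_names, top_k=15):
    """Take the most 'frequent' last words as the Corporate-Suffix-List."""
    freq = suffix_frequency(org_names)
    return sorted(freq, key=freq.get, reverse=True)[:top_k]

# The example above: Electric Corp. seen 3 times and Manufacturing Corp.
# seen 5 times gives Corp. two distinct preceding tokens, so "frequency" 2.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
```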
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the one preceding the sequence through the end of the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we also check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's.
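The acronym (ACRO) matching described earlier (e.g., FCC matched against Federal Communications Commission) can be sketched as follows. This is an illustrative sketch, not the paper's code; the underscore spellings `A_begin`, `A_continue`, `A_end`, `A_unique` and the function name are my own:

```python
def acronym_features(doc_tokens):
    """Assign A_begin / A_continue / A_end to initCaps sequences whose
    initials spell an all-caps acronym found in the document, and
    A_unique to the acronym token itself."""
    feats = {i: set() for i in range(len(doc_tokens))}
    acronyms = {t for t in doc_tokens
                if t.isupper() and t.isalpha() and len(t) > 1}
    for i, tok in enumerate(doc_tokens):
        if tok in acronyms:
            feats[i].add("A_unique")
    for acro in acronyms:
        n = len(acro)
        for i in range(len(doc_tokens) - n + 1):
            window = doc_tokens[i:i + n]
            # sequence of initial-capitalized (but not all-caps) words
            # whose initials spell the acronym
            if (all(w[0].isupper() and not w.isupper() for w in window)
                    and "".join(w[0] for w in window) == acro):
                feats[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].add("A_continue")
                feats[i + n - 1].add("A_end")
    return feats
```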
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', '2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive P05-1013_vardha,P05-1013,1,0,This paper talks about Pseudo-Projective Dependency Parsing.,This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
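The inline formulas in the passage above were lost in extraction. A plausible reconstruction, in notation chosen here (the paper's own symbols did not survive), contrasts the usual sentence-level decoding objective with the proposed document-conditioned one:

```latex
% Usual statistical NER decoding over a sentence s = w_1 \dots w_n:
\hat{t} = \arg\max_{t_1,\dots,t_n} \; P(t_1,\dots,t_n \mid s)
% Proposed objective, conditioning also on D, the information
% extracted from the whole document containing s:
\hat{t} = \arg\max_{t_1,\dots,t_n} \; P(t_1,\dots,t_n \mid s, D)
```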
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) * prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is the, and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
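The decoding scheme described in Section 3.2 earlier (per-word maximum entropy probabilities combined with 0/1 transition admissibility and maximized by dynamic programming) can be sketched as follows. The class names and probabilities below are toy values, and `viterbi` is an invented name, not the paper's code:

```python
import math

def viterbi(classes, word_probs, admissible):
    """Pick the class sequence maximizing the product of per-word
    probabilities, where inadmissible class transitions get probability 0
    (and admissible ones probability 1, as described above)."""
    n = len(word_probs)
    best = {c: math.log(word_probs[0].get(c, 1e-12)) for c in classes}
    back = [{} for _ in range(n)]
    for i in range(1, n):
        new = {}
        for c in classes:
            cands = [(best[p], p) for p in classes if admissible(p, c)]
            if not cands:
                new[c] = float("-inf")
                continue
            score, prev = max(cands)
            new[c] = score + math.log(word_probs[i].get(c, 1e-12))
            back[i][c] = prev
        best = new
    last = max(best, key=best.get)
    path = [last]
    for i in range(n - 1, 0, -1):   # trace the best path back
        last = back[i][last]
        path.append(last)
    return list(reversed(path))
```

Even when the classifier prefers person_begin on the first word, a constraint forbidding person_begin followed directly by not-a-name forces the decoder onto an admissible sequence.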
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
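The dictionary tests described earlier (only initCaps tokens outside commonWords are checked against the unigram lists, and locations are additionally matched as two-token bigrams like New York) can be sketched as follows; all names here are illustrative, not the paper's code:

```python
def dictionary_features(tokens, common_words, unigram_lists, location_bigrams):
    """Fire a feature named after each dictionary that contains the token.
    Only initCaps tokens not in commonWords are tested; a matching
    location bigram marks both of its tokens."""
    feats = {i: set() for i in range(len(tokens))}
    for i, tok in enumerate(tokens):
        if not tok[0].isupper() or tok.lower() in common_words:
            continue                      # skip non-initCaps / common words
        for list_name, entries in unigram_lists.items():
            if tok.lower() in entries:
                feats[i].add(list_name)   # e.g. PersonFirstName
        if (i + 1 < len(tokens)
                and (tok.lower(), tokens[i + 1].lower()) in location_bigrams):
            feats[i].add("LocationName")
            feats[i + 1].add("LocationName")
    return feats
```

For Barry flew to New York, Barry fires PersonFirstName and the pair New York fires LocationName on both tokens, assuming those dictionary entries are present.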
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the one preceding the sequence through the end of the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we also check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'The word w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 [Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%] [Table 5: Comparison of results for MUC6] [Table 4: Training Data. Systems (No. of Articles / No. of Tokens, MUC6 then MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000] For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
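The acronym-matching heuristic (ACRO) described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name `acro_features` and the dictionary representation of the A begin / A continue / A end / A unique features are assumptions; only the matching logic (all-caps tokens as acronyms, matched against the initials of initial-capitalized word sequences) comes from the text.

```python
# Sketch of the ACRO global feature group (illustrative, not the paper's code).
from typing import Dict, List

def acro_features(tokens: List[str]) -> Dict[int, str]:
    """Mark initial-caps word sequences whose initials match an all-caps
    acronym found anywhere in the document (e.g., FCC vs. Federal
    Communications Commission)."""
    # Candidate acronyms: tokens made up entirely of capital letters.
    acronyms = {t for t in tokens if len(t) > 1 and t.isalpha() and t.isupper()}
    features: Dict[int, str] = {}
    i = 0
    while i < len(tokens):
        # Find a maximal run of initial-capitalized (but not all-caps) words.
        j = i
        while j < len(tokens) and tokens[j][:1].isupper() and not tokens[j].isupper():
            j += 1
        run = tokens[i:j]
        if len(run) > 1:
            initials = "".join(w[0] for w in run)
            if initials in acronyms:
                features[i] = "A_begin"
                for k in range(i + 1, j - 1):
                    features[k] = "A_continue"
                features[j - 1] = "A_end"
                # The acronym itself gets A_unique wherever it occurs.
                for k, t in enumerate(tokens):
                    if t == initials:
                        features[k] = "A_unique"
        i = max(j, i + 1)
    return features

doc = "The FCC ruled . Federal Communications Commission officials agreed .".split()
print(acro_features(doc))
```

As in the paper's FCC example, the expanded form receives begin/continue/end features while the acronym token itself is marked unique.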
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. [Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu] [Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens] [Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W06-3114_swastika,W06-3114,3,172,"Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.","We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t_1, ..., t_n | s), where s is the sequence of words in a sentence, and t_1, ..., t_n is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t_1, ..., t_n | s, D), where t_1, ..., t_n is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
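The core idea of extracting information D from the whole document, as in the ICOC feature group described earlier (consulting the case of other occurrences of a word in unambiguous, non-sentence-initial positions), can be sketched as follows. The function name and returned feature labels are illustrative assumptions, not the paper's API.

```python
# Illustrative sketch of the ICOC idea: the case of a word in an
# unambiguous position elsewhere in the document is more reliable than
# its case in a headline or at the start of a sentence.
from typing import List, Optional

def icoc_feature(word: str, sentences: List[List[str]]) -> Optional[str]:
    """Return 'Other-initCaps' or 'Other-not-initCaps' based on the first
    occurrence of `word` in a non-sentence-initial position, else None."""
    for sent in sentences:
        for pos, tok in enumerate(sent):
            if pos == 0:
                continue  # sentence-initial capitalization is unreliable
            if tok.lower() == word.lower():
                return "Other-initCaps" if tok[:1].isupper() else "Other-not-initCaps"
    return None

doc = [["Bush", "put", "a", "freeze", "on", "spending"],
       ["Restrictions", "put", "in", "place", "by", "President", "Bush"]]
print(icoc_feature("Bush", doc))
```

This mirrors the paper's "Bush put a freeze on . . ." example: the second, mid-sentence occurrence of Bush confirms that its capitalization is meaningful.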
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) exp(sum_j lambda_j f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters lambda_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, [Footnote 1: http://maxent.sourceforge.net] 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_{i=1}^{n} P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. [Table 1: Features based on the token string]', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w-1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w-1) is set to 1.', 'Same for w+1. 
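The Case and Zone feature group just described can be sketched as follows. This is a hedged illustration: the tuple representation of features and the function name are assumptions, but the three case categories (initCaps, allCaps, mixedCaps, with allCaps implying initCaps) follow the text.

```python
# Sketch of the "Case and Zone" local feature group (names illustrative).
from typing import List, Tuple

def case_and_zone(token: str, zone: str) -> List[Tuple[str, str]]:
    """Return the case features of `token`, each paired with its zone."""
    feats = []
    if token.isupper() and token.isalpha():
        feats.append(("allCaps", zone))
    if token[:1].isupper():
        # An allCaps token is also initCaps, as stated in the text.
        feats.append(("initCaps", zone))
    elif token[:1].islower() and any(c.isupper() for c in token):
        feats.append(("mixedCaps", zone))
    return feats

print(case_and_zone("IBM", "TXT"))   # all-caps token in the text zone
print(case_and_zone("Bush", "HL"))   # initCaps token in a headline
```

Pairing the case with the zone lets the classifier learn, for instance, that capitalization in a headline (HL) is less informative than in body text (TXT).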
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w-1 is found in the list of person first names, the feature PersonFirstName for w-1 is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
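The distinct-preceding-token "frequency" used to compile Corporate-Suffix-List can be sketched as follows. The function name and the `min_distinct` threshold are assumptions; the counting scheme matches the Electric Corp. / Manufacturing Corp. example in the text.

```python
# Sketch of compiling Corporate-Suffix-List: the "frequency" of a final
# token is the number of DISTINCT tokens that precede it in organization
# names, not its raw count (illustrative reconstruction).
from collections import defaultdict
from typing import Iterable, Set

def corporate_suffixes(org_names: Iterable[str], min_distinct: int = 2) -> Set[str]:
    preceders = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            preceders[toks[-1]].add(toks[-2])
    return {last for last, prev in preceders.items() if len(prev) >= min_distinct}

orgs = ["Electric Corp.", "Electric Corp.", "Electric Corp.",
        "Manufacturing Corp.", "General Electric"]
print(corporate_suffixes(orgs))
```

Here Corp. is seen after two distinct tokens (Electric, Manufacturing), so its "frequency" is 2 and it qualifies as a suffix, whereas Electric is preceded only by General and does not.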
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence from w onwards is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the token preceding the sequence up to w is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) [Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names] The McCann family . . 
.', '(3)In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'The word w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 [Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%] [Table 5: Comparison of results for MUC6] [Table 4: Training Data. Systems (No. of Articles / No. of Tokens, MUC6 then MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000] For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. [Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu] [Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens] [Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W99-0613_vardha,W99-0613,1,0,This paper talks about Unsupervised Models for Named Entity Classification.,This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(C | S), where S is the sequence of words in a sentence, and C is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(C | S, D), where C is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) * prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word = the, and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (Footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes P(c_i | c_i-1) to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_i p(c_i | s, D) * prod_i P(c_i | c_i-1), where p(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) of w-1 (or of w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string of w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1. 
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
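The "frequency" computation for corporate suffixes described above (counting distinct preceding tokens rather than raw occurrences) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and example organization names are hypothetical.

```python
from collections import defaultdict

def rank_corporate_suffixes(org_names, top_k=10):
    # "Frequency" of a candidate suffix = number of DISTINCT tokens seen
    # immediately before it as the last token of an organization name.
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return sorted(preceding, key=lambda t: len(preceding[t]), reverse=True)[:top_k]

# 'Electric Corp.' and 'Manufacturing Corp.' give Corp. a frequency of 2,
# no matter how many times each full name occurs in training.
names = ["Electric Corp.", "Electric Corp.", "Manufacturing Corp.",
         "General Electric Co.", "Acme Co.", "Widget Co."]
print(rank_corporate_suffixes(names, top_k=2))  # ['Co.', 'Corp.']
```

Using distinct preceding tokens rather than raw counts keeps one very frequent company name from inflating its last word into a "suffix".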
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the token preceding the sequence to the last token of the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy. (Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.) (Table 4: Training Data. MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 tokens (MUC6) and 790,000 tokens (MUC7); MENE: 350 articles / 321,000 tokens (MUC7).) (Table 5: Comparison of results for MUC6.)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
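The acronym-matching step behind the ACRO feature group described above can be sketched as follows. This is an assumption-laden reconstruction for illustration only; the paper publishes no code, and the function name and underscore spellings of the feature names (A_begin, etc.) are hypothetical.

```python
def acro_features(tokens):
    """Match all-caps acronyms (e.g. FCC) against sequences of
    initial-capitalized words whose initials spell them out
    (Federal Communications Commission)."""
    # Candidate acronyms: all-caps alphabetic words of length > 1.
    acronyms = {t for t in tokens if t.isalpha() and t.isupper() and len(t) > 1}
    feats, matched = {}, set()
    for acro in acronyms:
        k = len(acro)
        for i in range(len(tokens) - k + 1):
            span = tokens[i:i + k]
            # Each word must be initCaps (capitalized, not all-caps),
            # and the initials must spell the acronym.
            if (all(w[0].isupper() and not w.isupper() for w in span)
                    and "".join(w[0] for w in span) == acro):
                feats[i] = "A_begin"
                for j in range(i + 1, i + k - 1):
                    feats[j] = "A_continue"
                feats[i + k - 1] = "A_end"
                matched.add(acro)
    # The acronym token itself gets A_unique once an expansion is found.
    for i, t in enumerate(tokens):
        if t in matched:
            feats[i] = "A_unique"
    return feats

toks = "FCC officials said the Federal Communications Commission ruled".split()
print(acro_features(toks))
```

Because the match is document-internal, the feature fires only when both the acronym and its expansion occur in the same document, which is exactly the global-context signal the section describes.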
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) (Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive D10-1083,D10-1083,10,241,"This assumption, however, is not inherent to type-based tagging models.","To our knowledge, ours is the first analysis of this kind for Arabic parsing.","['Better Arabic Parsing: Baselines, Evaluations, and Analysis', 'In this paper, we offer broad insight into the underperformance of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.', 'First, we identify sources of syntactic ambiguity understudied in the existing parsing literature.', 'Second, we show that although the Penn Arabic Treebank is similar to other treebanks in gross statistical terms, annotation consistency remains problematic.', 'Third, we develop a human interpretable grammar that is competitive with a latent variable PCFG.', 'Fourth, we show how to build better models for three different parsers.', 'Finally, we show that in application settings, the absence of gold segmentation lowers parsing performance by 2-5% F1.', 'It is well-known that constituency parsing models designed for English often do not generalize easily to other languages and treebanks.1 Explanations for this phenomenon have included the relative informativeness of lexicalization (Dubey and Keller, 2003; Arun and Keller, 2005), insensitivity to morphology (Cowan and Collins, 2005; Tsarfaty and Sima'an, 2008), and the effect of variable word order (Collins et al., 1999).', 'Certainly these linguistic factors increase the difficulty of syntactic disambiguation.', 'Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Kübler, 
2005).', '(Footnote 1: The apparent difficulty of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajič and Zemánek, 2004; Habash and Roth, 2009).)', 'To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply "Arabic") because of the unusual opportunity it presents for comparison to English parsing results.', 'The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993).', 'Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages.', 'But Arabic contains a variety of linguistic phenomena unseen in English.', 'Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation.', 'For humans, this characteristic can impede the acquisition of literacy.', 'How do additional ambiguities caused by devocalization affect statistical learning?', 'How should the absence of vowels and syntactic markers influence annotation choices and grammar development?', 'Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering.', 'Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).', 'Next we show that the ATB is similar to other treebanks in gross statistical terms, but that annotation consistency remains low relative to English (§3).', 'We then use linguistic and annotation insights to develop a manually annotated grammar for Arabic (§4).', 'To facilitate comparison with previous work, we exhaustively evaluate this grammar and two other parsing models when gold segmentation is assumed (§5).', 'Finally, we provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing (§6).', 'We 
quantify error categories in both evaluation settings.', 'To our knowledge, ours is the first analysis of this kind for Arabic parsing.', 'Arabic is a morphologically rich language with a root-and-pattern system similar to other Semitic languages.', 'The basic word order is VSO, but SVO, VOS, and VO configurations are also possible.2 Nouns and verbs are created by selecting a consonantal root (usually triliteral or quadriliteral), which bears the semantic core, and adding affixes and diacritics.', 'Particles are uninflected.', ""(Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form an. Word / Head of / Complement / POS: 1. inna "Indeed, truly" / VP / Noun / VBP; 2. anna "That" / SBAR / Noun / IN; 3. in "If" / SBAR / Verb / IN; 4. an "to" / SBAR / Verb / IN.)"", 'The distinctions in the ATB are linguistically justified, but complicate parsing.', 'Table 8a shows that the best model recovers SBAR at only 71.0% F1.', 'Diacritics can also be used to specify grammatical relations such as case and gender.', 'But diacritics are not present in unvocalized text, which is the standard form of, e.g., news media documents.3', 'Let us consider an example of ambiguity caused by devocalization.', 'Table 1 shows four words whose unvocalized surface forms (an) are indistinguishable.', 'Whereas Arabic linguistic theory assigns (1) and (2) to the class of pseudo-verbs inna and her sisters since they can be inflected, the ATB conventions treat (2) as a complementizer, which means that it must be the head of SBAR.', '(Figure 1: parse trees for the two readings; panels (a) Reference and (b) Stanford.)', 'Because these two words have identical complements, syntax rules are typically unhelpful for distinguishing between them.', 'This is especially true in the case of quotations, which are common 
in the ATBâ\x80\x94where (1) will follow a verb like (2) (Figure 1).', 'Even with vocalization, there are linguistic categories that are difficult to identify without semantic clues.', 'Two common cases are the attribu tive adjective and the process nominal _; maSdar, which can have a verbal reading.4 At tributive adjectives are hard because they are or- thographically identical to nominals; they are inflected for gender, number, case, and definiteness.', 'Moreover, they are used as substantives much 2 Unlike machine translation, constituency parsing is not significantly affected by variable word order.', 'However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009).', 'In particular, the decision to represent arguments in verb- initial clauses as VP internal makes VSO and VOS configurations difficult to distinguish.', 'Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).', '3 Techniques for automatic vocalization have been studied (Zitouni et al., 2006; Habash and Rambow, 2007).', 'However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.', ""4 Traditional Arabic linguistic theory treats both of these types as subcategories of noun � '.i . 
Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form 0 an (Table 1)."", 'more frequently than is done in English.', 'Process nominals name the action of the transitive or ditransitive verb from which they derive.', 'The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.', 'When the maSdar lacks a determiner, the constituent as a whole resem bles the ubiquitous annexation construct � ?f iDafa.', 'Gabbard and Kulick (2008) show that there is significant attachment ambiguity associated with iDafa, which occurs in 84.3% of the trees in our development set.', 'Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase.', 'All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly.', 'For parsing, the most challenging form of ambiguity occurs at the discourse level.', 'A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005).', 'Instead of offsetting new topics with punctuation, writers of MSA in sert connectives such as � wa and � fa to link new elements to both preceding clauses and the text as a whole.', 'As a result, Arabic sentences are usually long relative to English, especially after Length English (WSJ) Arabic (ATB) â\x89¤ 20 41.9% 33.7% â\x89¤ 40 92.4% 73.2% â\x89¤ 63 99.7% 92.6% â\x89¤ 70 99.9% 94.9% Table 2: Frequency distribution for sentence lengths in the WSJ (sections 2â\x80\x9323) and the ATB (p1â\x80\x933).', 'English parsing evaluations usually report results on sentences up to length 40.', 'Arabic sentences of up to length 63 would need to be.', 'evaluated to account for the same fraction of the data.', 'We propose a limit of 70 words for Arabic parsing evaluations.', 'ATB CTB6 Negra WSJ Trees 23449 28278 20602 43948 
Word Typess 40972 45245 51272 46348 Tokens 738654 782541 355096 1046829 Tags 32 34 499 45 Phrasal Cats 22 26 325 27 Test OOV 16.8% 22.2% 30.5% 13.2% Per Sentence Table 4: Gross statistics for several different treebanks.', 'Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 221 (train) and section 23 (test).', 'Table 3: Dev set frequencies for the two most significant discourse markers in Arabic are skewed toward analysis as a conjunction.', 'segmentation (Table 2).', 'The ATB gives several different analyses to these words to indicate different types of coordination.', 'But it conflates the coordinating and discourse separator functions of wa (<..4.b � �) into one analysis: conjunction(Table 3).', 'A better approach would be to distin guish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990).', 'We show that noun-noun vs. 
discourse-level coordination ambiguity in Arabic is a significant source of parsing errors (Table 8c).', '3.1 Gross Statistics.', 'Linguistic intuitions like those in the previous section inform language-specific annotation choices.', 'The resulting structural differences between treebanks can account for relative differences in parsing performance.', 'We compared the ATB5 to treebanks for Chinese (CTB6), German (Negra), and English (WSJ) (Table 4).', 'The ATB is disadvantaged by having fewer trees with longer average yields.6', '5 LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1).', 'We map the ATB morphological analyses to the shortened “Bies” tags for all experiments.', 'But to its great advantage, it has a high ratio of non-terminals/terminals (μ Constituents / μ Length).', 'Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003).', 'Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic.', 'In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance.', '3.2 Inter-annotator Agreement.', 'Annotation consistency is important in any supervised learning task.', 'In the initial release of the ATB, inter-annotator agreement was inferior to other LDC treebanks (Maamouri et al., 2008).', 'To improve agreement during the revision process, a dual-blind evaluation was performed in which 10% of the data was annotated by independent teams.', 'Maamouri et al. (2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB.', 'But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference, or similarity, between treebanks.', 'Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB.', 'For a corpus C, let M 
be the set of tuples ⟨n, l⟩, where n is an n-gram with bracketing label l.', 'If any n appears in a corpus position without a bracketing label, then we also add ⟨n, NIL⟩ to M.', 'We call the set of unique n-grams with multiple labels in M the variation nuclei of C.', '6 Generative parsing performance is known to deteriorate with sentence length.', 'As a result, Habash et al. (2006) developed a technique for splitting and chunking long sentences.', 'In application settings, this may be a profitable strategy.', 'Table 5: Evaluation of 100 randomly sampled variation nuclei types.', 'The samples from each corpus were independently evaluated.', 'The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate.', 'Bracketing variation can result from either annotation errors or linguistic ambiguity.', 'Human evaluation is one way to distinguish between the two cases.', 'Following Dickinson (2005), we randomly sampled 100 variation nuclei from each corpus and evaluated each sample for the presence of an annotation error.', 'The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ.7', 'Table 5 shows type- and token-level error rates for each corpus.', 'The 95% confidence intervals for type-level errors are (5580, 9440) for the ATB and (1400, 4610) for the WSJ.', 'The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference.', 'On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample.', 'At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts.', 'For example, one of the ATB samples was the determiner dhalik “that.” The sample occurred in 1507 corpus positions, and we found that the annotations were consistent.', 'If we remove this sample from the evaluation, then the ATB type-level error rises to only 37.4% while the n-gram error rate increases to 6.24%.', 'The number of ATB n-grams also falls below the WSJ sample size as the largest WSJ sample appeared in only 162 corpus positions.', '7 Unlike Dickinson (2005), we strip traces and only consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs).', 'Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei.', 'Figure 2: An ATB sample from the human evaluation.', 'The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a).', 'But the city name Sharm Al-Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b).', 'We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003).', 'Manual annotation results in human interpretable grammars that can inform future treebank annotation decisions.', 'A simple lexicalized PCFG with second order Markovization gives relatively poor performance: 75.95% F1 on the test set.8 But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).', '8 We use head-finding rules specified by a native speaker of Arabic.', 'This PCFG is incorporated into the Stanford Parser, a factored model that chooses a 1-best parse from the product of constituency and dependency parses.', 'In our grammar, features are realized as annotations to basic category labels.', 'We start with noun features since written Arabic contains a very high proportion of NPs.', 'genitiveMark indicates recursive NPs with an indefinite nominal left daughter and an NP right daughter.', 'This is the form of recursive levels in iDafa constructs.', 'We also add an annotation for one-level iDafa (oneLevelIdafa) constructs since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008).', 'For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead).', 'Base NPs are the other significant category of nominal phrases.', 'markBaseNP indicates these non-recursive nominal phrases.', 'This feature includes named entities, which the ATB marks with a flat NP node dominating an arbitrary number of NNP pre-terminal daughters (Figure 2).', 'For verbs we add two features.', 'First we mark any node that dominates (at any level) a verb phrase (markContainsVerb).', 'This feature has a linguistic justification.', 'Historically, Arabic grammar has identified two sentence types: those that begin with a nominal, and those that begin with a verb.', 'But foreign learners are often surprised by the verbless predications that are frequently used in Arabic.', 'Although these are technically nominal, they have become known as “equational” sentences.', 'markContainsVerb is especially effective for distinguishing root S nodes of equational sentences.', 'We also mark all nodes that dominate an SVO configuration (containsSVO).', 'In MSA, SVO usually appears in non-matrix clauses.', 'Lexicalizing several POS tags improves performance.', 'splitIN captures the verb/preposition idioms that are widespread in Arabic.', 'Although this feature helps, we encounter one consequence of variable word order.', 'Unlike the WSJ corpus, which has a high frequency of rules like VP → VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects).', 'For example, we might have VP → VB NP PP, where the NP is the subject.', 'This annotation choice weakens splitIN.', 'The ATB gives all punctuation a single tag.', 'For parsing, this is a mistake, especially in the case of interrogatives.', 'splitPUNC restores the convention of the WSJ.', 'We also mark all tags that dominate a word with the feminine ending taa marbuuTa (markFeminine).', 'To differentiate between the coordinating and discourse separator functions of conjunctions (Table 3), we mark each CC with the label of its right sister (splitCC).', 'The intuition here is that the role of a discourse marker can usually be determined by the category of the word that follows it.', 'Because conjunctions are elevated in the parse trees when they separate recursive constituents, we choose the right sister instead of the category of the next word.', 'We create equivalence classes for verb, noun, and adjective POS categories.', 'Table 6: Incremental dev set results for the manually annotated grammar (sentences of length ≤ 70).', 'We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers.', 'All experiments use ATB parts 1–3 divided according to the canonical split suggested by Chiang et al. (2006).', 'Preprocessing the raw trees improves parsing performance considerably.9 We first discard all trees dominated by X, which indicates errors and non-linguistic text.', 'At the phrasal level, we remove all function tags and traces.', 'We also collapse unary chains with identical basic categories like NP → NP.', 'The pre-terminal morphological analyses are mapped to the shortened “Bies” tags provided with the treebank.', 'Finally, we add “DT” to the tags for definite nouns and adjectives (Kulick et al., 2006).', '9 Both the corpus split and pre-processing code are available at http://nlp.stanford.edu/projects/arabic.shtml.', 'The orthographic normalization strategy we use is simple.10 In addition to removing all diacritics, we strip instances of taTweel, collapse variants of alif to bare alif,11 and map Arabic punctuation characters to their Latin equivalents.', 'We retain segmentation markers, which are consistent only in the vocalized section of the treebank, to differentiate between, e.g., “they” and “their.”', 'Because we use the vocalized section, we must remove null pronoun markers.', '10 Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation.', '11 taTweel is an elongation character used in Arabic script to justify text.', 'It has no syntactic function.', 'Variants of alif are inconsistently used in Arabic texts.', 'For alif with hamza, normalization can be seen as another level of devocalization.', 'In Table 7 we give results for several evaluation metrics.', 'Evalb is a Java re-implementation of the standard labeled precision/recall metric.12', '12 For English, our Evalb implementation is identical to the most recent reference (EVALB20080701).', 'For Arabic we add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB.', 'Tokens tagged as PUNC are not discarded unless they consist entirely of punctuation.', 'Table 7: Test set results for the Stanford, Bikel (Baseline Self-tag, Pretag, GoldPOS), and Berkeley parsers at sentence lengths ≤ 70 and all: Evalb LP/LR/F1, Leaf Ancestor corpus and sentence scores with exact matches, and tagging accuracy.', 'Maamouri et al. (2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length ≤ 40.', 'The Bikel GoldPOS configuration only supplies the gold POS tags; it does not force the parser to use them.', 'We are unaware of prior results for the Stanford parser.', 'Figure 3: Dev set learning curves (F1) for the Berkeley, Stanford, and Bikel parsers at sentence lengths ≤ 70.', 'All three curves remain steep at the maximum training set size of 18818 trees.', 'The Leaf Ancestor metric measures the cost of transforming guess trees to the reference (Sampson and Babarczy, 2003).', 'It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB.', 'For each terminal, the Leaf Ancestor metric extracts the shortest path to the root.', 'It then computes a normalized Levenshtein edit distance between the extracted chain and the reference.', 'The range of the score is between 0 and 1 (higher is better).', 'We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores along with the number of exactly matching guess trees.', '5.1 Parsing Models.', 'The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features: 1. presence of the determiner Al; 2. contains digits; 3. ends with the feminine affix taa marbuuTa; 4. various verbal and adjectival suffixes.', 'Other notable parameters are second order vertical Markovization and marking of unary rules.', 'Modifying the Berkeley parser for Arabic is straightforward.', 'After adding a ROOT node to all trees, we train a grammar using six split-and-merge cycles and no Markovization.', 'We use the default inference parameters.', 'Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings.', 'However, when we pre-tag the input, as is recommended for English, we notice a 0.57% F1 improvement.', 'We use the log-linear tagger of Toutanova et al. (2003), which gives 96.8% accuracy on the test set.', '5.2 Discussion.', 'The Berkeley parser gives state-of-the-art performance for all metrics.', 'Our baseline for all sentence lengths is 5.23% F1 higher than the best previous result.', 'The difference is due to more careful pre-processing.', 'However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009).', 'Moreover, the Stanford parser achieves the most exact Leaf Ancestor matches and tagging accuracy that is only 0.1% below the Bikel model, which uses pre-tagged input.', 'In Figure 4 we show an example of variation between the parsing models.', 'We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8.', 'The errors shown are from the Berkeley parser output, but they are representative of the other two parsing models.', 'Figure 4: The constituent restoring of its constructive and effective role parsed by the three different models (gold segmentation): (a) reference, (b) Stanford, (c) Berkeley, (d) Bikel.', 'The ATB annotation distinguishes between verbal and nominal readings of maSdar process nominals.', 'Like verbs, maSdar takes arguments and assigns case to its objects, whereas it also demonstrates nominal characteristics by, e.g., taking determiners and heading iDafa (Fassi Fehri, 1993).', "In the ATB, asta'adah is tagged 48 times as a noun and 9 times as verbal noun.", 'Consequently, all three parsers prefer the nominal reading.', 'Table 8b shows that verbal nouns are the hardest pre-terminal categories to identify.', 'None of the models attach the attributive adjectives correctly.', '6 Joint Segmentation and Parsing.', 'Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words.', 'Since these are distinct syntactic units, they are typically segmented.', 'The ATB segmentation scheme is one of many alternatives.', 'Until now, all evaluations of Arabic parsing, including the experiments in the previous section, have assumed gold segmentation.', 'But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline.', 'Segmentation errors cascade into the parsing phase, placing an artificial limit on parsing performance.', 'Lattice parsing (Chappelier et al., 1999) 
is an alternative to a pipeline that prevents cascading errors by placing all segmentation options into the parse chart.', 'Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic.', 'We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton.', 'To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a).', 'Formally, for a lexicon L and segments I ∈ L, O ∉ L, each word automaton accepts the language I*(O + I)I*.', 'Aside from adding a simple rule to correct alif deletion caused by the preposition l-, no other language-specific processing is performed.', 'Our evaluation includes both weighted and unweighted lattices.', 'We weight edges using a unigram language model estimated with Good-Turing smoothing.', 'Despite their simplicity, unigram weights have been shown as an effective feature in segmentation models (Dyer, 2009).13', 'The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005).', 'MADA uses an ensemble of SVMs to first re-rank the output of a deterministic morphological analyzer.', 'For each input token, the segmentation is then performed deterministically given the 1-best analysis.', '13 Of course, this weighting makes the PCFG an improper distribution.', 'However, in practice, unknown word models also make the distribution improper.', 'Table 8: Per category performance of the Berkeley parser on sentence lengths ≤ 70 (dev set, gold segmentation). (a) Major phrasal categories (# gold, F1): ADJP 1216, 59.45; SBAR 2918, 69.81; FRAG 254, 72.87; VP 5507, 78.83; S 6579, 78.91; PP 7516, 80.93; NP 34025, 84.95; ADVP 1093, 90.64; WHNP 787, 96.00. (b) Major POS categories. (c) Ten lowest scoring (Collins, 2003)-style dependencies occurring more than 700 times (parent, head, modifier, direction, # gold, F1): NP NP TAG R 946, 0.54; S S S R 708, 0.57; NP NP ADJP R 803, 0.64; NP NP NP R 2907, 0.66; NP NP SBAR R 1035, 0.67; NP NP PP R 2713, 0.67; VP TAG PP R 3230, 0.80; NP NP TAG L 805, 0.85; VP TAG SBAR R 772, 0.86; S VP NP L 961, 0.87.', '(a) Of the high frequency phrasal categories, ADJP and SBAR are the hardest to parse.', 'We showed in §2 that lexical ambiguity explains the underperformance of these categories.', '(b) POS tagging accuracy is lowest for maSdar verbal nouns (VBG, VN) and adjectives (e.g., JJ).', 'Richer tag sets have been suggested for modeling morphologically complex distinctions (Diab, 2007), but we find that linguistically rich tag sets do not help parsing.', '(c) Coordination ambiguity is shown in dependency scores by, e.g., ⟨S S S R⟩ and ⟨NP NP NP R⟩.', '⟨NP NP PP R⟩ and ⟨NP NP ADJP R⟩ are both iDafa attachment.', 'Since guess and gold trees may now have different yields, the question of evaluation is complex.', 'Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric.', 'But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal.', 'Therefore, we only score guess/gold pairs with identical character yields, a condition that allows us to measure parsing, tagging, and segmentation accuracy by ignoring whitespace.', 'Table 9 shows that MADA produces a high quality segmentation, and that the effect of cascading segmentation errors on parsing is only 1.92% F1.', 'However, MADA is language-specific and relies on manually constructed dictionaries.', 'Conversely, the lattice parser requires no linguistic resources and produces segmentations of comparable quality.', 'Nonetheless, parse quality 
is much lower in the joint model because a lattice is effectively a long sentence.', 'A cell in the bottom row of the parse chart is required for each potential whitespace boundary.', 'As we have said, parse quality decreases with sentence length.', 'Finally, we note that simple weighting gives nearly a 2% F1 improvement, whereas Goldberg and Tsarfaty (2008) found that unweighted lattices were more effective for Hebrew.', 'Table 9: Dev set results for sentences of length ≤ 70.', 'Coverage indicates the fraction of hypotheses in which the character yield exactly matched the reference.', 'Each model was able to produce hypotheses for all input sentences.', 'In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.', 'By establishing significantly higher parsing baselines, we have shown that Arabic parsing performance is not as poor as previously thought, but remains much lower than English.', 'We have described grammar state splits that significantly improve parsing performance, catalogued parsing errors, and quantified the effect of segmentation errors.', 'With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus.', 'Our results suggest that current parsing models would benefit from better annotation consistency and enriched annotation in certain syntactic configurations.', 'Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work.', 'We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions.', 'The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship.', 'This paper is based on work supported in part by DARPA through IBM.', 'The content does not necessarily reflect the views of the U.S. 
Government, and no official endorsement should be inferred.']",extractive W06-3114_sweta,W06-3114,1,170,"In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.",This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | s), where s is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in s.', 'Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
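Written out side by side, the change of conditioning event is the whole proposal; the symbols s, t1, ..., tn, and D below are reconstructed from the surrounding definitions rather than copied from the paper's own (garbled) notation:

```latex
\underbrace{\arg\max_{t_1,\dots,t_n} P(t_1,\dots,t_n \mid s)}_{\text{sentence-based NER}}
\qquad\longrightarrow\qquad
\underbrace{\arg\max_{t_1,\dots,t_n} P(t_1,\dots,t_n \mid s,\, D)}_{\text{proposed: condition on document } D}
```

The left objective uses only the sentence s; the right additionally conditions on document-level information D, with a single classifier rather than a secondary error-correcting one.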
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).

Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods.

Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 * 20% output is used to train the reference resolution component.

We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.

The system described in
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique. Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

p(o | h) = (1 / Z(h)) * prod_j alpha_j ^ f_j(h, o)

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

f_j(h, o) = 1 if o = true and the previous word in h is "the"; 0 otherwise.

The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique). To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c1, ..., cn assigned to the words in a sentence s in a document D is defined as
follows:

P(c1, ..., cn | s, D) = prod_i P(ci | s, D) * P(ci | c(i-1))

where P(ci | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 x the total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if the previous token w-1 (or the next token w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string of the token, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. Same for w-1.
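The case-and-zone feature groups above can be illustrated with a minimal sketch. The helper names are invented for illustration and this is not the paper's implementation; it only shows how a token's capitalization pattern and document zone combine into binary features.

```python
# Illustrative sketch (invented helper names, not the paper's implementation):
# turn a token's capitalization pattern and its document zone into the binary
# "Case and Zone" features described above.

def case_of(token):
    """Mirror the paper's capitalization categories."""
    if token.isupper():
        return "allCaps"      # e.g. "IBM"; also counts as initCaps below
    if token[:1].isupper():
        return "initCaps"     # e.g. "Bush"
    if any(c.isupper() for c in token):
        return "mixedCaps"    # starts lower case but has internal capitals
    return "lowercase"

def case_zone_features(token, zone):
    """Return the set of binary features 'set to 1' for this token."""
    feats = {"zone-" + zone}  # exactly one zone feature per token
    case = case_of(token)
    if case == "allCaps":
        # an allCaps token is also initCaps, per the description above
        feats.add("(initCaps, %s)" % zone)
        feats.add("(allCaps, %s)" % zone)
    elif case == "initCaps":
        feats.add("(initCaps, %s)" % zone)
    elif case == "mixedCaps":
        feats.add("(mixedCaps, %s)" % zone)
    return feats
```

A lowercase token contributes only its zone feature; in the maximum entropy framework all of these features can fire simultaneously for the same token.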
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w+1 is found in the list of person first names, the feature PersonFirstName is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ...,
Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
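The "distinct previous tokens" count used to rank candidate suffixes can be sketched in a few lines. This is an illustrative sketch with a hypothetical helper name, not the authors' code, reproducing the Corp. example above:

```python
from collections import defaultdict

# Illustrative sketch (hypothetical helper, not the authors' code) of the
# "frequency" used to rank candidate corporate suffixes: the number of
# DISTINCT previous tokens each organization-final token is seen with.

def suffix_frequencies(org_names):
    """org_names: organization names from training data, as token lists."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        if len(name) >= 2:
            prev_tokens[name[-1]].add(name[-2])  # last token <- distinct predecessor
    return {last: len(prevs) for last, prevs in prev_tokens.items()}

# The example above: "Electric Corp." seen 3 times and "Manufacturing Corp."
# seen 5 times, with no other token preceding "Corp." -> frequency 2.
orgs = [["Electric", "Corp."]] * 3 + [["Manufacturing", "Corp."]] * 5
```

Raw occurrence counts would score Corp. as 8 here; counting distinct predecessors (2) instead rewards tokens that end many different organization names rather than one frequent name.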
For a token w that is in a consecutive sequence of initCaps tokens, if any token in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the words preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)
The McCann family . . . (3)

Table 2: Sources of Dictionaries

Description         Source
Location Names      http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net
Corporate Names     http://www.fmlx.com
Person First Names  http://www.census.gov/genealogy/names
Person Last Names
In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp.
has an additional feature of I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group

           MUC6     MUC7
Baseline   90.75%   85.22%
+ ICOC     91.50%   86.24%
+ CSPP     92.89%   86.96%
+ ACRO     93.04%   86.99%
+ SOIC     93.25%   87.22%
+ UNIQ     93.27%   87.24%

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.

Table 4: Training Data

              MUC6                     MUC7
              Articles   Tokens        Articles   Tokens
MENERGI       318        160,000       200        180,000
IdentiFinder  –          650,000       –          790,000
MENE          –          –             350        321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
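The quoted error reductions follow from the Table 3 F-measures, taking error to be 100 minus the F-measure. A quick arithmetic check (illustrative code, not from the paper):

```python
# Arithmetic check (illustrative, not from the paper) of the error reductions
# implied by Table 3, taking error to be 100 minus the F-measure.

def error_reduction(baseline_f, final_f):
    """Relative reduction (%) in error going from baseline to final F-measure."""
    baseline_err = 100.0 - baseline_f
    final_err = 100.0 - final_f
    return 100.0 * (baseline_err - final_err) / baseline_err

muc6 = error_reduction(90.75, 93.27)  # about 27%
muc7 = error_reduction(85.22, 87.24)  # about 14%
```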
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens.)

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other three are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower-case letter and contains both upper- and lower-case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × the total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the token string w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1; otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (fewer than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The strings of the previous token w-1 and the next token w+1 are used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1; if w is not initCaps, then (not-initCaps, w+1) is set to 1. The same is done for w-1.
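The Case-and-Zone group lends itself to a compact illustration. The following Python sketch is our own (not from MENERGI or opennlp); it assumes the group is represented as a dict of binary features keyed by (case, zone) pairs:

```python
def case_zone_features(token, zone):
    # Sketch of the Case-and-Zone feature group described above.
    # The three case categories follow the text; the dict-of-binary-
    # features representation is our own illustration.
    feats = {}
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1   # starts with a capital letter
    if token.isupper():
        feats[("allCaps", zone)] = 1    # all capitals (also initCaps)
    if token[:1].islower() and any(c.isupper() for c in token):
        feats[("mixedCaps", zone)] = 1  # e.g. "eBay"
    return feats
```

Note that, as in the text, an allCaps token such as IBM fires both the (allCaps, zone) and the (initCaps, zone) features.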
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases the use of hyphens can be considered optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6; words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. All lists except the location list are processed into lists of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York); tokens are matched against the unigrams, and sequences of two consecutive tokens are matched against the bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps that are not found in commonWords are tested against each list in Table 2. If such a token is found in a list, then a feature for that list is set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName is set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . ., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday,
. . ., Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens (cslist) that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
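The "frequency" used to rank candidate suffixes counts distinct preceding tokens rather than raw occurrences. A minimal Python sketch of this computation (the function name and the list-of-name-strings input format are our own assumptions):

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    # "Frequency" of a candidate corporate suffix, as defined above:
    # the number of DISTINCT tokens seen immediately before it as the
    # last word of an organization name, not the raw occurrence count.
    prev_tokens = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            prev_tokens[toks[-1]].add(toks[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

# The example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times gives Corp. a "frequency" of 2.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
```

Counting distinct contexts rather than raw counts keeps one very frequent organization name from dominating the suffix ranking.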
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens from w to the end of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the words preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

    McCann initiated a new global system. (1)
    CEO of McCann . . . (2)

Table 2: Sources of Dictionaries

    Description          Source
    Location Names       http://www.timeanddate.com
                         http://www.cityguide.travel-guides.com
                         http://www.worldtravelguide.net
    Corporate Names      http://www.fmlx.com
    Person First Names   http://www.census.gov/genealogy/names
    Person Last Names    http://www.census.gov/genealogy/names

    The McCann family . . .
(3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, as the first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in a sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in the local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system then looks for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. for an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. appears only once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature I_begin set to 1, Broadcasting has an additional feature I_continue set to 1, and Corp.
has an additional feature I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone in which w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements in both MUC6 and MUC7 test accuracy. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group

               MUC6      MUC7
    Baseline   90.75%    85.22%
    + ICOC     91.50%    86.24%
    + CSPP     92.89%    86.96%
    + ACRO     93.04%    86.99%
    + SOIC     93.25%    87.22%
    + UNIQ     93.27%    87.24%

Table 4: Training Data

                   MUC6                      MUC7
    Systems        Articles   Tokens         Articles   Tokens
    MENERGI        318        160,000        200        180,000
    IdentiFinder   –          650,000        –          790,000
    MENE           –          –              350        321,000

Table 5: Comparison of results for MUC6

All our results are obtained by using only the official training data provided by the MUC conferences. The reason we did not train with both the MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is considerably less than that used by MENE and IdentiFinder. In this section, we compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
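The error-reduction figures quoted above follow from Table 3 when error is taken to be 100 minus F-measure. A small Python check (the helper function is our own, not part of MENERGI):

```python
def error_reduction(baseline_f, new_f):
    # Relative reduction in error when F-measure improves from
    # baseline_f to new_f, with error measured as (100 - F).
    return (new_f - baseline_f) / (100.0 - baseline_f)

# Figures from Table 3: baseline vs. all global features added.
muc6 = error_reduction(90.75, 93.27)   # ~0.27, i.e. 27%
muc7 = error_reduction(85.22, 87.24)   # ~0.14, i.e. 14%
```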
IdentiFinder's performance on MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For a fair comparison, we have tabulated all results together with the size of the training data used (Table 5 and Table 6). Besides the size of the training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they had added list membership features, which helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens.)

Table 6: Comparison of results for MUC7

In MUC6, the best result was achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with the amount of training data. We have estimated the performance of IdentiFinder '99 at 200K words of training data from these graphs. For MUC7, there are no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data alone is not ideal, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results. The effect of a second reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to information from the same document only.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of their LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency, or complex formulations of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2
Table 3: F-measure after successive addition of each global feature group
           MUC6    MUC7
Baseline   90.75%  85.22%
+ ICOC     91.50%  86.24%
+ CSPP     92.89%  86.96%
+ ACRO     93.04%  86.99%
+ SOIC     93.25%  87.22%
+ UNIQ     93.27%  87.24%
Table 4: Training Data
Systems       MUC6 No. of Articles  MUC6 No. of Tokens  MUC7 No. of Articles  MUC7 No. of Tokens
MENERGI       318                   160,000             200                   180,000
IdentiFinder  –                     650,000             –                     790,000
MENE          –                     –                   350                   321,000
Table 5: Comparison of results for MUC6', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
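The 27% and 14% error reductions quoted here follow directly from the Table 3 F-measures, taking the error to be 100% minus the F-measure; a small check, with a hypothetical helper name:

```python
def relative_error_reduction(baseline_f, final_f):
    """Relative reduction in error between two F-measures (in percent),
    where error = 100% - F-measure."""
    return (final_f - baseline_f) / (100.0 - baseline_f)

# MUC6: baseline 90.75% -> 93.27% with all global features
muc6 = relative_error_reduction(90.75, 93.27)  # ~0.27, i.e. 27%
# MUC7: baseline 85.22% -> 87.24%
muc7 = relative_error_reduction(85.22, 87.24)  # ~0.14, i.e. 14%
```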
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.
[Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu]
[Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.]
Table 6: Comparison of results for MUC7', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive C02-1025,C02-1025,1,1,This paper presents a maximum entropy-based named entity recognizer (NER).,This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
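The tag-sequence search described here (finding the class sequence with the highest probability, with inadmissible transitions excluded) can be sketched as a small dynamic program; the class names, scores, and function names below are illustrative stand-ins, and the per-word probabilities stand in for whatever classifier is used:

```python
def viterbi(scores, admissible):
    """Select the highest-probability class sequence.

    scores: list over word positions of dicts {class: P(class | context)},
            as produced by some per-word classifier (hypothetical input).
    admissible: function (prev_class, cls) -> bool; inadmissible
            transitions are treated as probability 0.
    """
    # best[c] = (probability of the best sequence ending in c, that sequence)
    best = {c: (p, [c]) for c, p in scores[0].items()}
    for dist in scores[1:]:
        new = {}
        for c, p in dist.items():
            cands = [(bp * p, path + [c])
                     for prev, (bp, path) in best.items()
                     if admissible(prev, c)]
            if cands:
                new[c] = max(cands)
        best = new
    return max(best.values())[1]

# A transition like person_begin -> location_unique would be ruled out:
scores = [{"p_begin": 0.6, "o": 0.4}, {"loc_unique": 0.9, "p_end": 0.1}]
ok = lambda prev, c: not (prev == "p_begin" and c == "loc_unique")
viterbi(scores, ok)  # -> ["o", "loc_unique"]
```

Multiplying raw probabilities is fine for short sentences; a production version would work in log space to avoid underflow.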
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) exp(Σ_j λ_j f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is “the”, and 0 otherwise.', 'The parameters λ_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '[Footnote 1: http://maxent.sourceforge.net]', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, …, c_n | s, D) = ∏_i P(c_i | s, D) · P(c_i | c_{i−1}), where P(c_i | s, D) is determined by the maximum entropy classifier and P(c_i | c_{i−1}) is the 0/1 transition probability defined above.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.
[Table 1: Features based on the token string]', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
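The Case and Zone feature group described above can be sketched as a small extractor; the function name and the returned (case, zone) pairs are illustrative, not the paper's implementation:

```python
def case_zone_features(token, zone):
    """Sketch of the Case and Zone feature group: pair the token's
    capitalization pattern with the document zone it appears in."""
    feats = set()
    if token[:1].isupper():
        feats.add(("initCaps", zone))
    if token.isupper():
        feats.add(("allCaps", zone))  # an allCaps token is also initCaps
    if token[:1].islower() and any(c.isupper() for c in token):
        feats.add(("mixedCaps", zone))
    return feats

case_zone_features("Bush", "TXT")  # {("initCaps", "TXT")}
case_zone_features("IBM", "HL")    # initCaps and allCaps, paired with HL
```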
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
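Checking a candidate name sequence against these two lists might be sketched as follows; the function name, index convention, and lowercased list entries are assumptions for illustration:

```python
def suffix_prefix_features(tokens, i_start, i_end, corp_suffixes, person_prefixes):
    """Sketch of the Corporate-Suffix / Person-Prefix local features for a
    consecutive sequence of initCaps tokens tokens[i_start:i_end+1].

    Person prefixes are checked starting from the word *preceding* the
    sequence, since prefixes like Mr. are not part of the name itself,
    whereas corporate suffixes like Ltd. are part of the name.
    """
    feats = set()
    if any(t.lower() in corp_suffixes for t in tokens[i_start:i_end + 1]):
        feats.add("Corporate-Suffix")
    if any(t.lower() in person_prefixes
           for t in tokens[max(i_start - 1, 0):i_end + 1]):
        feats.add("Person-Prefix")
    return feats

# "Mr. McCann" -> Person-Prefix; "McCann Pte. Ltd." -> Corporate-Suffix
suffix_prefix_features(["Mr.", "McCann"], 1, 1, {"ltd.", "corp."}, {"mr.", "dr."})
```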
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the word preceding the sequence up to the last token of the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries
Description         Source
Location Names      http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net
Corporate Names     http://www.fmlx.com
Person First Names  http://www.census.gov/genealogy/names
Person Last Names
The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2
Table 3: F-measure after successive addition of each global feature group
           MUC6    MUC7
Baseline   90.75%  85.22%
+ ICOC     91.50%  86.24%
+ CSPP     92.89%  86.96%
+ ACRO     93.04%  86.99%
+ SOIC     93.25%  87.22%
+ UNIQ     93.27%  87.24%
Table 4: Training Data
Systems       MUC6 No. of Articles  MUC6 No. of Tokens  MUC7 No. of Articles  MUC7 No. of Tokens
MENERGI       318                   160,000             200                   180,000
IdentiFinder  –                     650,000             –                     790,000
MENE          –                     –                   350                   321,000
Table 5: Comparison of results for MUC6', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.
[Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu]
[Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.]
Table 6: Comparison of results for MUC7', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W11-2123_vardha,W11-2123,1,0,This paper talks about KenLM: Faster and Smaller Language Model Queries.,This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T | S), where S is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S.
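The move from P(T | S) to P(T | S, D) simply means the per-word classifier sees document-level evidence as extra features. A minimal toy illustration (our own sketch, not the paper's implementation) of one document-global feature in the spirit of the ICOC group, the case of another occurrence of the same word:

```python
def word_features(i, sentence, document):
    """Features for sentence[i], drawn from the sentence AND the whole document."""
    w = sentence[i]
    feats = [f"word={w.lower()}", f"initCaps={w[:1].isupper()}"]
    # Global evidence: case of the same word at unambiguous (non-initial)
    # positions elsewhere in the document.
    others = [tok for sent in document for j, tok in enumerate(sent)
              if j > 0 and tok.lower() == w.lower()
              and not (sent is sentence and j == i)]
    if others:
        feats.append(f"otherOccInitCaps={others[0][:1].isupper()}")
    return feats

doc = [["Bush", "put", "a", "freeze", "on", "spending"],
       ["Restrictions", "put", "in", "place", "by", "President", "Bush"]]
print(word_features(0, doc[0], doc))
# -> ['word=bush', 'initCaps=True', 'otherOccInitCaps=True']
```

Here the sentence-initial "Bush" is ambiguous on local evidence alone, but the later unambiguous occurrence supplies the disambiguating global feature.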
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) Π_j α_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is "the", and 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '1 http://maxent.sourceforge.net', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c1, ..., cn assigned to the words in a sentence s in a document D is defined as
follows: P(c1, ..., cn | s, D) = Π_i P(ci | s, D) × P(ci-1 → ci), where P(ci | s, D) is determined by the maximum entropy classifier and P(ci-1 → ci) is the 0-or-1 transition probability defined above.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses the tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If the token string is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The strings of the previous token w-1 and the next token w+1 are used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1.
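A sketch of how a few of these token-string tests (in the style of Table 1) might look as binary shape features; the feature names here are illustrative, not the paper's exact inventory:

```python
import re

def token_shape_features(tok):
    """Return the set of shape features that fire for a token string."""
    tests = {
        "initCaps": tok[:1].isupper(),
        "allCaps": tok.isalpha() and tok.isupper(),
        "initCapPeriod": re.fullmatch(r"[A-Z][a-z]*\.", tok) is not None,
        "allDigits": tok.isdigit(),
        "containsDigit": any(c.isdigit() for c in tok),
    }
    return {name for name, fired in tests.items() if fired}

print(sorted(token_shape_features("Mr.")))   # -> ['initCapPeriod', 'initCaps']
print(sorted(token_shape_features("IBM")))   # -> ['allCaps', 'initCaps']
```

Note that, as the text says of allCaps and initCaps, several features in the group can fire for the same token; the maximum entropy framework imposes no priority among them.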
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . .
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
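The "distinct previous token" frequency used to build Corporate-Suffix-List can be sketched as follows (our own illustration; it reproduces the Electric Corp. / Manufacturing Corp. example in the text):

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """For each final token of an organization name, count the number of
    DISTINCT tokens that immediately precede it in the training names."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            prev_tokens[toks[-1]].add(toks[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times:
# the "frequency" of Corp. is 2 (two distinct preceding tokens).
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequency(orgs))  # -> {'Corp.': 2}
```

Counting distinct predecessors rather than raw occurrences keeps a suffix that appears many times with a single organization from looking more productive than it is.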
For a token in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the words preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . .
. (3)', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence “Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.”, a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If the word is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (No. of Articles, No. of Tokens): MENERGI 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder 650,000 tokens (MUC6) and 790,000 tokens (MUC7), articles not reported; MENE 350 articles / 321,000 tokens (MUC7 only).', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is much less than that used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's.
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive N04-1038,N04-1038,3,28,"BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.","We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T | S), where S is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S.
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) * prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is "the", and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(Footnote 1: http://maxent.sourceforge.net)', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes P(c_i | c_i-1) to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) * P(c_i | c_i-1), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w+1 and w-1: Similarly, if w+1 (or w-1) is initCaps, a feature (initCaps, zone) for the next (or previous) token is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string of w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1. 
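The Case-and-Zone feature group above can be sketched like this; the code is an illustrative reconstruction (not the authors' implementation), and the zone string is assumed to come from the document's SGML markup.

```python
def case_and_zone_features(token, zone):
    """Return the binary (case, zone) features that fire for one token:
    initCaps for a leading capital, allCaps for an all-capital token
    (which is also initCaps, as in the paper), and mixedCaps for a token
    that starts lower case but contains capitals."""
    feats = {}
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1
    if token.isupper():
        feats[("allCaps", zone)] = 1
    elif token[:1].islower() and token != token.lower():
        feats[("mixedCaps", zone)] = 1
    return feats
```

For example, IBM in the TXT zone fires both (initCaps, TXT) and (allCaps, TXT), while a plain lower-case word fires nothing in this group.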
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w+1 and w-1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w-1 or w+1 is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
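The "frequency" used to compile Corporate-Suffix-List counts distinct preceding tokens rather than raw occurrences, as in the Corp. example above. A minimal sketch, with a hypothetical function name:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each candidate suffix (the last token of an organization name),
    count the number of *distinct* tokens seen immediately before it.
    E.g. 'Electric Corp.' (3x) and 'Manufacturing Corp.' (5x) give
    'Corp.' a frequency of 2, not 8."""
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}
```

The most frequent suffixes under this count would then be kept as the Corporate-Suffix-List; the threshold itself is not specified here.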
For a token w that is in a consecutive sequence of initCaps tokens (w-m, ..., w, ..., w+n), if any of the tokens from w to w+n is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w-m-1 to w-1 is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w-m-1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2 (Sources of Dictionaries): Location Names - http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names - http://www.fmlx.com; Person First Names - http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
. (3)', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on ...", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on ...").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (footnote 2).', 'Table 3 (F-measure after successive addition of each global feature group; MUC6, MUC7): Baseline 90.75%, 85.22%; + ICOC 91.50%, 86.24%; + CSPP 92.89%, 86.96%; + ACRO 93.04%, 86.99%; + SOIC 93.25%, 87.22%; + UNIQ 93.27%, 87.24%.', 'Table 4 (Training Data; no. of articles / no. of tokens): MENERGI 318 / 160,000 (MUC6), 200 / 180,000 (MUC7); IdentiFinder - / 650,000 (MUC6), - / 790,000 (MUC7); MENE - / - (MUC6), 350 / 321,000 (MUC7).', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder (footnote 3).', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
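The acronym (ACRO) feature group described earlier (e.g. FCC matching Federal Communications Commission) can be sketched as follows. This is an illustrative reconstruction with deliberately simplified matching, not the authors' code.

```python
def acro_tags(tokens):
    """Return a per-token ACRO tag (or None): allCaps words of length > 1
    become acronyms (A_unique), and sequences of initCaps words whose
    initials spell an acronym found in the same document are marked
    A_begin / A_continue / A_end."""
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    tags = [None] * len(tokens)
    for i, t in enumerate(tokens):
        if t in acronyms:
            tags[i] = "A_unique"
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            if (all(w[:1].isupper() and not w.isupper() for w in window)
                    and "".join(w[0] for w in window) == acro):
                tags[i] = "A_begin"
                for j in range(i + 1, i + n - 1):
                    tags[j] = "A_continue"
                tags[i + n - 1] = "A_end"
    return tags
```

On a document containing both FCC and Federal Communications Commission, the acronym itself gets A_unique and the expansion gets A_begin, A_continue, A_end.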
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu)', '(Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive J96-3004,J96-3004,1,0,In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.,This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique. Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

    p(o | h) = (1 / Z(h)) * prod_j alpha_j^f_j(h, o)

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

    f_j(h, o) = 1  if o = true and previous word = the
              = 0  otherwise

The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we compare the results of MENE, IdentiFinder, and MENERGI.

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique). To eliminate such sequences, we define a transition probability between word classes, P(c_i | c_{i-1}), to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows:

    P(c_1, ..., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) * P(c_i | c_{i-1})

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses the tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint; multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. It imposes constraints that are based on the probability of each name class during training.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for the previous (or next) token is set to 1, etc.

Token Information: This group consists of 10 features based on the string of w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

Table 1: Features based on the token string

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. Similarly for w-1.
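Several of the local feature groups above are simple binary indicators over the token string and its zone. As a rough illustrative sketch only (the function names and feature-key format here are our own, not the paper's), such indicators could be computed as:

```python
# Illustrative sketch of a few of the local feature groups
# (feature names and key format are assumptions; not the authors' code).

def case_of(token):
    """Classify token case as in the Case and Zone feature group."""
    if token.isupper():
        return "allCaps"       # e.g. IBM; an allCaps token is also initCaps
    if token[:1].isupper():
        return "initCaps"      # e.g. Bush
    if token[:1].islower() and any(c.isupper() for c in token):
        return "mixedCaps"     # e.g. eBay
    return None

def local_features(token, zone, is_first_word):
    """Return the set of binary features that fire for one token."""
    feats = {"nonContextual", "zone-" + zone}     # always-on and Zone groups
    case = case_of(token)
    if case is not None:
        feats.add("(" + case + "," + zone + ")")  # Case and Zone group
    if is_first_word:
        feats.add("firstword")                    # First Word group
    if token[:1].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")                # one Token Information feature
    return feats
```

For a token such as Mr. in the TXT zone, this fires the always-on feature, the zone feature, the case-and-zone pair, and InitCapPeriod, mirroring how multiple feature groups can fire for the same token in the maximum entropy framework.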
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w-1 or w+1 is found in the list of person first names, a corresponding PersonFirstName feature is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . ., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, . . ., Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens (cslist) that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
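The "frequency" used to rank candidate corporate suffixes counts distinct preceding tokens rather than raw occurrences. A minimal sketch of that count (the helper name is hypothetical; this is not the authors' code):

```python
from collections import defaultdict

# Sketch of the suffix "frequency" described above: the frequency of a
# candidate suffix is the number of DISTINCT tokens that precede it as
# the last token of an organization name in the training data.

def suffix_frequencies(org_names):
    """org_names: iterable of tokenized organization names."""
    preceding = defaultdict(set)
    for name in org_names:
        if len(name) >= 2:
            last, prev = name[-1], name[-2]
            preceding[last].add(prev)   # keep only distinct preceding tokens
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# The example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times gives Corp. a "frequency" of 2.
orgs = [["Electric", "Corp."]] * 3 + [["Manufacturing", "Corp."]] * 5
freq = suffix_frequencies(orgs)   # {"Corp.": 2}
```

Counting distinct preceding tokens rewards suffixes that attach to many different organization names, rather than suffixes that merely appear in one frequent name.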
For a token w that is in a consecutive sequence of initCaps tokens, if one of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If a token preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

    McCann initiated a new global system. (1)
    CEO of McCann . . . (2)
    The McCann family . . . (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

Table 2: Sources of Dictionaries
Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net
Corporate Names: http://www.fmlx.com
Person First Names: http://www.census.gov/genealogy/names
Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in a sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. for an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature I_begin set to 1, Broadcasting has an additional feature I_continue set to 1, and Corp. has an additional feature I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements on both MUC6 and MUC7 test accuracy. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group
            MUC6     MUC7
Baseline    90.75%   85.22%
+ ICOC      91.50%   86.24%
+ CSPP      92.89%   86.96%
+ ACRO      93.04%   86.99%
+ SOIC      93.25%   87.22%
+ UNIQ      93.27%   87.24%

Table 4: Training Data
              MUC6                     MUC7
              Articles   Tokens        Articles   Tokens
MENERGI       318        160,000       200        180,000
IdentiFinder  -          650,000       -          790,000
MENE          -          -             350        321,000

Table 5: Comparison of results for MUC6

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is far less than that used by MENE and IdentiFinder.

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
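As a side illustration of the ACRO feature group described in Section 4.2, matching an all-caps acronym against a sequence of initial-capitalized words can be sketched as follows. The function names and the exact matching rule (initials of the words spelling the acronym) are our assumptions; the paper does not spell out its matching procedure.

```python
# Sketch of the ACRO check (assumed matching rule, not the authors' code):
# an initCaps word sequence matches an acronym when the words' initials
# spell the acronym.

def matches_acronym(acronym, words):
    """True if the initCaps word sequence expands the all-caps acronym."""
    initials = "".join(w[0] for w in words if w[:1].isupper())
    return len(words) > 1 and initials == acronym

def acro_features(acronym, words):
    """Assign A_begin / A_continue / A_end to a matching expansion."""
    if not matches_acronym(acronym, words):
        return {}
    feats = {words[0]: "A_begin", words[-1]: "A_end"}
    for w in words[1:-1]:
        feats.setdefault(w, "A_continue")   # interior words of the sequence
    return feats

feats = acro_features("FCC", ["Federal", "Communications", "Commission"])
# feats["Federal"] == "A_begin", feats["Commission"] == "A_end"
```

For FCC and Federal Communications Commission, this assigns A_begin, A_continue, and A_end as in the example in Section 4.2, while the acronym token itself would receive A_unique.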
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Note that the MUC data can be obtained from the Linguistic Data Consortium (http://www.ldc.upenn.edu), and that the training data for IdentiFinder is actually given in words (i.e., 650K and 790K words), rather than tokens. Besides the size of the training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency, or complex formulations of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
Named Entity Recognition: A Maximum Entropy Approach Using Global Information

This paper presents a maximum entropy-based named entity recognizer (NER). It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier. Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier. In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.

A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc.
On its own, a NER can also provide users who are looking for person or organization names with quick information. In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task. Statistical NERs usually find the sequence of tags that maximizes the probability P(E | s), where s is the sequence of words in a sentence, and E is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999). We propose maximizing P(E | s, D), where E is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s.
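As a toy illustration of the quantity being maximized, the sketch below scores every possible tag sequence for a short sentence and returns the argmax. The tag names, the per-word probabilities, and the word-level independence assumption are all invented for illustration; they are not the model proposed here.

```python
# Toy brute-force search for the tag sequence E maximizing P(E | s),
# assuming (unrealistically) independent per-word probabilities.
# Tag names and probabilities are invented for illustration.
from itertools import product

def best_tag_sequence(word_probs):
    """word_probs: one {tag: probability} dict per word in the sentence."""
    tags = list(word_probs[0])
    best_seq, best_p = None, -1.0
    for seq in product(tags, repeat=len(word_probs)):
        p = 1.0
        for probs, tag in zip(word_probs, seq):
            p *= probs[tag]
        if p > best_p:
            best_seq, best_p = list(seq), p
    return best_seq

# A two-word sentence with invented probabilities:
probs = [{"PER": 0.7, "O": 0.3}, {"PER": 0.2, "O": 0.8}]
best_tag_sequence(probs)  # ['PER', 'O']
```

A real system replaces the brute-force enumeration with dynamic programming over admissible class transitions, as described later in the paper.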
Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999). We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush", then "Bush"). As such, global information from the whole context of a document is important to more accurately recognize named entities. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.

Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7
participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999). Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 × 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data. The system described in
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique. Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

    p(o | h) = (1/Z(h)) · Π_j α_j^f_j(h, o)

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

    f_j(h, o) = 1 if o = true and the previous word is "the"; 0 otherwise.

The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique). To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as
follows:

    P(c_1, ..., c_n | s, D) = Π_{i=1..n} p(c_i | s, D) × Π_{i=2..n} P(c_i | c_{i-1})

where p(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string of w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. Same for w-1.
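A few token-string tests in the spirit of Table 1 can be sketched as follows. Only InitCapPeriod is named in the text above; AllCaps and ContainsDigit are assumed stand-ins for the rest of the 10-feature inventory, not the paper's exact list.

```python
# Sketch of token-string feature tests in the spirit of Table 1.
# InitCapPeriod comes from the text; the other two are assumed stand-ins.
import re

def token_features(tok):
    return {
        "InitCapPeriod": bool(re.fullmatch(r"[A-Z].*\.", tok)),  # e.g. Mr.
        "AllCaps": tok.isalpha() and tok.isupper(),              # e.g. IBM
        "ContainsDigit": any(ch.isdigit() for ch in tok),        # e.g. A8
    }

token_features("Mr.")["InitCapPeriod"]  # True
```

Each test is a binary feature, so several can fire for the same token, in line with the maximum entropy framework's lack of feature-priority constraints.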
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ...,
Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
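The distinct-preceding-token count in the Electric Corp. / Manufacturing Corp. example can be sketched like this; the function name and the input format (one organization name per list entry) are assumptions for illustration.

```python
# Sketch: the "frequency" of a candidate corporate suffix is the number
# of distinct tokens seen immediately before it in organization names.
from collections import defaultdict

def suffix_frequencies(org_names):
    prev_tokens = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            prev_tokens[toks[-1]].add(toks[-2])
    return {last: len(prevs) for last, prevs in prev_tokens.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times:
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
suffix_frequencies(names)["Corp."]  # 2
```

Counting distinct predecessors rather than raw occurrences keeps a suffix that appears many times after a single word from outranking one that combines with many different organization names.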
For a token w that is in a consecutive sequence of initCaps tokens (w_i, ..., w_j), if any of the tokens from w to w_j is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from w_{i-1} to w is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check for w_{i-1}, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

    McCann initiated a new global system. (1)
    CEO of McCann . . . (2)

Table 2: Sources of Dictionaries
    Location Names:      http://www.timeanddate.com
                         http://www.cityguide.travel-guides.com
                         http://www.worldtravelguide.net
    Corporate Names:     http://www.fmlx.com
    Person First Names:  http://www.census.gov/genealogy/names
    Person Last Names

    The McCann family . . .
(3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp.
has an additional feature of I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.[2]

Table 3: F-measure after successive addition of each global feature group
                MUC6      MUC7
    Baseline    90.75%    85.22%
    + ICOC      91.50%    86.24%
    + CSPP      92.89%    86.96%
    + ACRO      93.04%    86.99%
    + SOIC      93.25%    87.22%
    + UNIQ      93.27%    87.24%

Table 4: Training Data
                   MUC6                  MUC7
                   Articles  Tokens      Articles  Tokens
    MENERGI        318       160,000     200       180,000
    IdentiFinder   –         650,000     –         790,000
    MENE           –         –           350       321,000

Table 5: Comparison of results for MUC6

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets. All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.[3] In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
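The 27% and 14% figures follow from the F-measures in Table 3, assuming the standard definition of relative error reduction:

```python
# Relative error reduction computed from F-measures (on a 0-100 scale).
def error_reduction(baseline_f, new_f):
    old_err, new_err = 100 - baseline_f, 100 - new_f
    return (old_err - new_err) / old_err

round(error_reduction(90.75, 93.27) * 100)  # MUC6: 27
round(error_reduction(85.22, 87.24) * 100)  # MUC7: 14
```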
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

[2] MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
[3] Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.

Table 6: Comparison of results for MUC7

The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W06-3114_swastika,W06-3114,5,174,The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (footnote 2).', 'Table 3 (F-measure after successive addition of each global feature group), MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4 (Training Data), Systems: No. of Articles / No. of Tokens for MUC6 and MUC7: MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder (footnote 3).', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words, i.e., 650K & 790K words, rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W99-0623_vardha,W99-0623,7,144,Combining multiple highly-accurate independent parsers yields promising results.,"By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999). Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 * 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data. The system described in
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes * 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) * prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is "the", and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.

3.2 Testing

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes P(c_i | c_i-1) to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as
follows: P(c_1, ..., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) * P(c_i | c_i-1), where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Table 1: Features based on the token string

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 * total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string of w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. The same applies to w-1.
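A few of the local feature groups above can be illustrated with a short sketch. This is hypothetical code, not the paper's implementation: the feature-string format and the function name `local_features` are invented, and only a subset of the groups (zone, case-and-zone, first word, token information, lexicon) is shown.

```python
def local_features(tokens, i, zone, lexicon):
    """Return the set of binary features that fire for token w = tokens[i].
    Illustrative subset of the local feature groups; mixedCaps and the
    neighboring-token groups are omitted for brevity."""
    w = tokens[i]
    feats = set()
    feats.add("zone-" + zone)                  # Zone feature
    if w[0].isupper():                         # Case and Zone: initCaps
        feats.add("initCaps," + zone)
    if w.isupper():                            # Case and Zone: allCaps
        feats.add("allCaps," + zone)           # (an allCaps token is also initCaps)
    if i == 0:                                 # First Word
        feats.add("firstword")
    if w[0].isupper() and w.endswith("."):     # Token Information (e.g. Mr.)
        feats.add("InitCapPeriod")
    if w in lexicon:                           # Lexicon Feature (post-cutoff vocab)
        feats.add("lexicon=" + w)
    return feats
```

In a maximum entropy framework all of these can fire simultaneously for the same token, which is the point the text makes in contrast to IdentiFinder's feature priorities.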
In the case where the next token w+1 is a hyphen, the token following the hyphen is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w-1 or w+1 is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ...
, Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
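The "number of distinct previous tokens" notion of frequency can be sketched as follows. This is a hypothetical helper, not the paper's code; in particular `min_freq` is an assumed cutoff, since the text only says the "most frequently occurring" last words are kept.

```python
from collections import defaultdict

def corporate_suffix_list(org_names, min_freq=2):
    """Collect candidate corporate suffixes from training organization names.
    The 'frequency' of a last token is the number of DISTINCT tokens that
    precede it (cf. the Electric Corp. / Manufacturing Corp. example)."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            # record which distinct token precedes this final token
            prev_tokens[tokens[-1]].add(tokens[-2])
    return {last for last, prevs in prev_tokens.items()
            if len(prevs) >= min_freq}
```

Counting distinct preceding tokens rather than raw occurrences keeps one very common organization name (e.g., five mentions of Manufacturing Corp.) from inflating a suffix's score.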
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens in the sequence, or the word preceding it, is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann ... (2)

Table 2: Sources of Dictionaries
Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net
Corporate Names: http://www.fmlx.com
Person First Names: http://www.census.gov/genealogy/names
Person Last Names

The McCann family ...
(3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on ...", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on ..."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group
            MUC6     MUC7
Baseline    90.75%   85.22%
+ ICOC      91.50%   86.24%
+ CSPP      92.89%   86.96%
+ ACRO      93.04%   86.99%
+ SOIC      93.25%   87.22%
+ UNIQ      93.27%   87.24%

Table 5: Comparison of results for MUC6

Table 4: Training Data
Systems        MUC6: No. of Articles / No. of Tokens    MUC7: No. of Articles / No. of Tokens
MENERGI        318 / 160,000                            200 / 180,000
IdentiFinder   - / 650,000                              - / 790,000
MENE           - / -                                    350 / 321,000

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3. In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
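The acronym-matching (ACRO) feature group described above can be sketched as a small matching routine. This is hypothetical illustrative code: the real system collects acronyms only from the text zone and will differ in details such as punctuation handling, and the feature names here use underscores.

```python
def acronym_features(tokens):
    """Assign A_begin / A_continue / A_end to initial-capitalized word
    sequences whose initials spell an all-caps acronym found in the same
    document, and A_unique to the acronym itself (sketch of ACRO)."""
    feats = {i: set() for i in range(len(tokens))}
    # words made up of all capital letters are stored as acronyms
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    for acro in acronyms:
        n = len(acro)
        for start in range(len(tokens) - n + 1):
            span = tokens[start:start + n]
            # a sequence of initCaps words whose initials match the acronym
            if all(w[0].isupper() for w in span) and \
               "".join(w[0] for w in span) == acro:
                feats[start].add("A_begin")
                for j in range(start + 1, start + n - 1):
                    feats[j].add("A_continue")
                feats[start + n - 1].add("A_end")
        for i, t in enumerate(tokens):
            if t == acro:
                feats[i].add("A_unique")
    return feats
```

On a document containing both FCC and Federal Communications Commission, the expansion's three words receive A_begin, A_continue, and A_end, while FCC receives A_unique, mirroring the example in the text.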
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results. The effect of a second reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i-1} (or for w_{i+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
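The case-and-zone, token-information, and first-word feature groups described above can be sketched as follows. This is a minimal illustration; the function name, feature keys, and zone strings are assumptions for the sketch, not the paper's actual implementation:

```python
def local_features(token, zone, is_first_word):
    """Return a dict of the (illustrative) local features set to 1 for a token."""
    feats = {"zone-" + zone: 1}                      # Zone feature group
    if token[:1].isupper():                          # initCaps (allCaps implies it)
        feats[("initCaps", zone)] = 1
    if token.isupper():
        feats[("allCaps", zone)] = 1
    if token[:1].islower() and any(c.isupper() for c in token):
        feats[("mixedCaps", zone)] = 1
    if token[:1].isupper() and token.endswith("."):  # e.g. "Mr." -> InitCapPeriod
        feats["InitCapPeriod"] = 1
    if is_first_word:                                # First Word feature group
        feats["firstword"] = 1
    return feats
```

Note that, as in the text, a token such as IBM fires both the allCaps and the initCaps features.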
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i-1} or w_{i+1} is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . . , December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
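The "frequency" computation described above (counting the number of distinct tokens seen immediately before each candidate suffix) can be sketched as follows; the function name and the input format, organization names as whitespace-separated strings, are illustrative assumptions:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """Map each last token of an organization name to its 'frequency':
    the number of distinct tokens seen immediately before it."""
    preceding = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            preceding[toks[-1]].add(toks[-2])
    return {suffix: len(prev) for suffix, prev in preceding.items()}
```

On the paper's own example (Electric Corp. three times, Manufacturing Corp. five times, no other preceding tokens), this gives Corp. a frequency of 2.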
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens from w_{i+1} up to the token immediately following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the token immediately preceding the sequence up to w_{i-1} is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', '(Table 2, Sources of Dictionaries: Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.)', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.', 'Table 3 (F-measure after successive addition of each global feature group): Baseline: MUC6 90.75%, MUC7 85.22%; + ICOC: 91.50%, 86.24%; + CSPP: 92.89%, 86.96%; + ACRO: 93.04%, 86.99%; + SOIC: 93.25%, 87.22%; + UNIQ: 93.27%, 87.24%.', 'Table 4 (Training Data): MENERGI: MUC6 318 articles / 160,000 tokens, MUC7 200 articles / 180,000 tokens; IdentiFinder: MUC6 650,000, MUC7 790,000 (given in words; article counts not given); MENE: MUC7 350 articles / 321,000 tokens.', '(Table 5: Comparison of results for MUC6.)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is much less than that used by MENE and IdentiFinder (see footnote 3).', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
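The reported error reductions can be checked directly from the Table 3 F-measures, taking the F-measure error to be 100 minus F:

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction (in %) of F-measure error, where error = 100 - F."""
    return 100.0 * (final_f - baseline_f) / (100.0 - baseline_f)

muc6 = error_reduction(90.75, 93.27)  # about 27%
muc7 = error_reduction(85.22, 87.24)  # about 14%
```

Rounding these values reproduces the 27% (MUC6) and 14% (MUC7) figures quoted in the text.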
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', '(Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves as training data increases.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive N04-1038,N04-1038,6,245,"Finally, several coreference systems have successfully incorporated anaphoricity determination modules.","By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T | s), where s is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | s, D), where T is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
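The tag-sequence maximization described above, combined with the 0/1 transition probabilities the paper later uses to rule out inadmissible class sequences, can be illustrated with a small Viterbi-style decoder. The class names and admissibility rules here are simplified assumptions for the sketch, not the paper's exact definitions:

```python
def admissible(prev, cur):
    """Transition probability: 1 if the class bigram is allowed, 0 otherwise
    (illustrative rules for the begin/continue/end/unique sub-classes)."""
    def split(c):
        return c.rsplit("_", 1) if "_" in c else (c, None)
    pname, psub = split(prev) if prev is not None else (None, None)
    cname, csub = split(cur)
    if psub in ("begin", "continue"):   # inside a name: must stay in that name
        return 1 if cname == pname and csub in ("continue", "end") else 0
    return 1 if csub in ("begin", "unique", None) else 0

def viterbi(word_probs, classes):
    """word_probs[i][c] plays the role of the per-word classifier probability;
    returns the admissible class sequence with the highest product."""
    best = {c: (admissible(None, c) * word_probs[0][c], [c]) for c in classes}
    for probs in word_probs[1:]:
        nxt = {}
        for c in classes:
            score, path = max(
                ((p * admissible(pc, c) * probs[c], path)
                 for pc, (p, path) in best.items()),
                key=lambda t: t[0])
            nxt[c] = (score, path + [c])
        best = nxt
    return max(best.values(), key=lambda t: t[0])[1]
```

For example, even if a later word locally prefers location_unique, an earlier person_begin forces the decoder onto an admissible person continuation instead.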
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F-measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) prod_j alpha_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, . . .', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_i P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i-1} (or for w_{i+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i-1} or w_{i+1} is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . . , December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens from w_{i+1} up to the token immediately following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the token immediately preceding the sequence up to w_{i-1} is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', '(Table 2, Sources of Dictionaries: Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.)', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group -- Baseline: MUC6 90.75%, MUC7 85.22%; + ICOC: 91.50%, 86.24%; + CSPP: 92.89%, 86.96%; + ACRO: 93.04%, 86.99%; + SOIC: 93.25%, 87.22%; + UNIQ: 93.27%, 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data -- MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 tokens (MUC6) and 790,000 tokens (MUC7); MENE: 350 articles / 321,000 tokens (MUC7 only).', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
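The reported error reductions can be checked against the Table 3 F-measures; a minimal sketch (the helper name is ours, not the paper's), treating error as 100 minus F-measure:

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error (100 - F) going from baseline to final."""
    return (final_f - baseline_f) / (100.0 - baseline_f)

muc6 = error_reduction(90.75, 93.27)  # ~0.27, the reported 27% for MUC6
muc7 = error_reduction(85.22, 87.24)  # ~0.14, the reported 14% for MUC7
```

Both values agree with the percentages quoted in the text, confirming that "reduction in error" here means relative, not absolute, improvement.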
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '[Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu] [Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens] [Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive C02-1025,C02-1025,6,10,The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.,"By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | s), where s is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
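The contrast between the standard sentence-level objective and the proposed document-level one can be written out explicitly. The notation below is our reconstruction, since the extraction dropped the original symbols:

```latex
% Standard sentence-based NER objective:
\hat{t} = \arg\max_{t_1,\dots,t_n} P(t_1,\dots,t_n \mid s)

% Proposed objective, conditioning also on document-level information D:
\hat{t} = \arg\max_{t_1,\dots,t_n} P(t_1,\dots,t_n \mid s, D)
```

The only change is the extra conditioning variable D, which is what lets a single classifier use global context directly instead of delegating it to a secondary error-correcting classifier.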
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) * prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', '[Footnote 1: http://maxent.sourceforge.net]', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c1, ..., cn assigned to the words in a sentence s in a document D is defined as 
follows: P(c1, ..., cn | s, D) = prod_{i=1..n} P(ci | s, D) * P(ci | c(i-1)), where P(ci | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '[Table 1: Features based on the token string]', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
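The testing procedure described above, with 0/1 transition probabilities ruling out inadmissible class sequences and dynamic programming picking the best admissible sequence, can be sketched as follows. This is a minimal Viterbi-style illustration under assumed data structures (per-word class-probability dicts and an admissibility predicate), not the paper's actual implementation:

```python
def best_sequence(class_probs, admissible):
    """Select the highest-probability sequence of word classes, allowing
    only admissible transitions (transition probability 1 or 0), via
    Viterbi-style dynamic programming.

    class_probs: list over words of {class: P(class | s, D)} dicts
    admissible:  function (prev_class, cls) -> bool
    """
    # best[c] = (score of best admissible sequence ending in class c, that sequence)
    best = {c: (p, [c]) for c, p in class_probs[0].items()}
    for probs in class_probs[1:]:
        new_best = {}
        for c, p in probs.items():
            candidates = [
                (score * p, path + [c])
                for prev, (score, path) in best.items()
                if admissible(prev, c)  # transition probability 1; 0 transitions are skipped
            ]
            if candidates:
                new_best[c] = max(candidates, key=lambda t: t[0])
        best = new_best
    return max(best.values(), key=lambda t: t[0])[1]

# Example: "person begin" may not be followed by "location unique".
admissible = lambda prev, cur: not (prev == "person_begin" and cur == "location_unique")
probs = [{"person_begin": 0.6, "other": 0.4},
         {"location_unique": 0.7, "other": 0.3}]
best_sequence(probs, admissible)  # -> ["other", "location_unique"]
```

Because inadmissible transitions contribute probability 0, the locally most likely pair (person_begin, location_unique) is excluded, and the best admissible sequence wins instead.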
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1. 
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w-1 is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token w that is in a consecutive sequence of initCaps tokens (w, ..., w+n), if any of the tokens from w to w+n is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w-1 to w+n-1 is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w-1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries -- Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group -- Baseline: MUC6 90.75%, MUC7 85.22%; + ICOC: 91.50%, 86.24%; + CSPP: 92.89%, 86.96%; + ACRO: 93.04%, 86.99%; + SOIC: 93.25%, 87.22%; + UNIQ: 93.27%, 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data -- MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 tokens (MUC6) and 790,000 tokens (MUC7); MENE: 350 articles / 321,000 tokens (MUC7 only).', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
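The acronym-matching (ACRO) feature group described earlier can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (whitespace tokens, acronyms taken as all-caps words, initials formed from first letters); the function and feature names are ours:

```python
def acro_features(tokens):
    """Mark initCaps sequences whose initials spell an all-caps acronym
    found in the same document (cf. FCC / Federal Communications Commission)."""
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats, matched = {}, set()
    n, i = len(tokens), 0
    while i < n:
        # collect a maximal run of initial-capitalized (but not all-caps) words
        if tokens[i][:1].isupper() and not tokens[i].isupper():
            j = i
            while j < n and tokens[j][:1].isupper() and not tokens[j].isupper():
                j += 1
            initials = "".join(t[0] for t in tokens[i:j])
            if j - i > 1 and initials in acronyms:
                feats[i] = "A_begin"
                for k in range(i + 1, j - 1):
                    feats[k] = "A_continue"
                feats[j - 1] = "A_end"
                matched.add(initials)
            i = j
        else:
            i += 1
    # the acronym itself gets A_unique once its expansion is found
    for idx, t in enumerate(tokens):
        if t in matched:
            feats[idx] = "A_unique"
    return feats

toks = ["The", "FCC", "and", "the", "Federal", "Communications", "Commission", "agreed"]
acro_features(toks)
# -> {4: "A_begin", 5: "A_continue", 6: "A_end", 1: "A_unique"}
```

This mirrors the paper's FCC example: the expansion tokens receive A begin / A continue / A end, and the acronym token receives A unique.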
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '[Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu] [Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens] [Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive P11-1061_swastika,P11-1061,3,3,They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.,"We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | s), where s is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
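The idea of conditioning on the whole document D as well as the sentence can be illustrated with a toy feature extractor that mixes local and document-level evidence in one feature set. This is a hypothetical sketch, not the authors' implementation; the function name and the simplified ICOC-style check are assumptions:

```python
def extract_features(sent, i, doc_tokens):
    """Features for token sent[i]. Local features use only the token
    itself; the global feature inspects other occurrences of the same
    word elsewhere in the document (simplified ICOC-style check)."""
    w = sent[i]
    feats = {f"word={w.lower()}"}
    if w[0].isupper():
        feats.add("initCaps")
    if i == 0:
        feats.add("firstword")
    # Global evidence: case of the same word in non-sentence-initial
    # positions of the whole document.
    others = [t for j, t in enumerate(doc_tokens)
              if j > 0 and t.lower() == w.lower()]
    if others:
        feats.add("other-initCaps" if all(t[0].isupper() for t in others)
                  else "other-not-initCaps")
    return feats
```

With a sentence-initial "Bush" and a later occurrence "President Bush" elsewhere in the document, the global feature disambiguates a capitalization that could otherwise be positional.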
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) * prod_j alpha_j^{f_j(h,o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h,o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h,o) = 1 if o = true and previous word = the, and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c1, ..., cn | s, D) = prod_{i=1..n} P(ci | s, D) * P(ci | ci-1), where P(ci | s, D) is determined by the maximum entropy classifier and P(ci | ci-1) is the transition probability just defined.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1. 
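The testing procedure of section 3.2 (classifier probabilities per token, transition weight 1 for admissible class pairs and 0 otherwise, and dynamic programming over the resulting sequences) can be sketched as follows. The tiny person-only tag set and the admissibility rules below are a hypothetical simplification, not the paper's full 29-class set:

```python
# Sketch of admissible-sequence decoding; illustrative tag set only.
CLASSES = ["person_begin", "person_continue", "person_end",
           "person_unique", "O"]

def admissible(prev, cur):
    # Inside a name (after begin/continue) we may only continue or end it;
    # otherwise a new name (begin/unique) or the not-a-name class may follow.
    if prev in ("person_begin", "person_continue"):
        return cur in ("person_continue", "person_end")
    return cur in ("person_begin", "person_unique", "O")

def viterbi(probs):
    """probs[i][c] is the classifier's probability of class c for token i;
    returns the highest-probability admissible class sequence."""
    best = {c: (probs[0][c], [c]) for c in CLASSES}
    for dist in probs[1:]:
        nxt = {}
        for c in CLASSES:
            cands = [(p * dist[c], path + [c])
                     for prev, (p, path) in best.items()
                     if admissible(prev, c)]
            if cands:
                nxt[c] = max(cands, key=lambda t: t[0])
        best = nxt
    return max(best.values(), key=lambda t: t[0])[1]
```

Note that a greedy per-token choice could pick person_begin followed by O, an inadmissible pair; the zero transition weight forces the dynamic program onto a globally admissible sequence instead.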
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . . , December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
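The "frequency" computation just described, counting distinct tokens seen immediately before each candidate suffix, can be sketched in a few lines. This is an illustrative sketch under the stated definition, not the authors' code:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """'Frequency' of a candidate corporate suffix = number of distinct
    tokens seen immediately before it across organization names, where
    each name is given as a list of tokens."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        for i in range(1, len(name)):
            prev_tokens[name[i]].add(name[i - 1])
    return {tok: len(prevs) for tok, prevs in prev_tokens.items()}
```

With "Electric Corp." seen 3 times and "Manufacturing Corp." 5 times, the frequency of "Corp." comes out as 2, matching the example in the text: repeated sightings of the same preceding token do not inflate the count.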
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens from w to the last token of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If the token preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', 'Table 2 (Sources of Dictionaries): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive D10-1044_swastika,D10-1044,8,151,They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.,"Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
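The ACRO global feature described in section 4.2, matching all-caps acronyms against sequences of initial-caps words, can be sketched as follows. The function name and the strict first-letter matching rule are illustrative assumptions, not the paper's exact procedure:

```python
def acronym_spans(acronym, tokens):
    """Spans [i, j) of consecutive initial-caps tokens whose initials
    spell the acronym (e.g. FCC -> Federal Communications Commission);
    matched spans would receive A begin / A continue / A end features."""
    k = len(acronym)
    spans = []
    for i in range(len(tokens) - k + 1):
        window = tokens[i:i + k]
        if (all(t[:1].isupper() for t in window)
                and "".join(t[0] for t in window) == acronym):
            spans.append((i, i + k))
    return spans
```

In the FCC example from the text, the span covering "Federal Communications Commission" is returned, so its first, middle, and last tokens can be given A begin, A continue, and A end respectively, while FCC itself gets A unique.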
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999). Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 * 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data. The system described in
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique. Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

p(o|h) = (1/Z(h)) * prod_j alpha_j^f_j(h,o)

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h,o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

f_j(h,o) = 1 if o = true and previous word = the; 0 otherwise.

The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique). To eliminate such sequences, we define a transition probability P(c_i|c_{i-1}) between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows:

P(c_1, ..., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) * P(c_i | c_{i-1})

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses the tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Table 1: Features based on the token string.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 x total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for that token is set to 1, etc.

Token Information: This group consists of 10 features based on the string w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. Same for w-1.
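As a concrete illustration, two of the feature groups above (Case and Zone, and the lexicon features of the previous and next token) might be computed as in the following sketch. This is our own reading of the text, not the authors' code: the function names, the dict-of-binary-features encoding, and the "prev"/"next" tags used to distinguish the two neighbor features are all assumptions.

```python
def case_zone_features(token, zone):
    """Case-and-Zone group: binary (case, zone) indicator features."""
    feats = {"zone-" + zone: 1}                  # zone indicator feature
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1            # starts with a capital
    if token.isalpha() and token.isupper():
        feats[("allCaps", zone)] = 1             # allCaps implies initCaps
    if token[:1].islower() and any(c.isupper() for c in token):
        feats[("mixedCaps", zone)] = 1           # e.g. "eBay"
    return feats

def neighbor_lexicon_features(prev_tok, token, next_tok):
    """Lexicon features of the previous and next token, combined with
    the initCaps information of the current token w."""
    case = "initCaps" if token[:1].isupper() else "not-initCaps"
    return {(case, "prev", prev_tok): 1, (case, "next", next_tok): 1}

print(case_zone_features("IBM", "TXT"))
print(neighbor_lexicon_features("Mr.", "Smith", "said"))
```

Note that, as remarked above, a token such as IBM in the TXT zone fires both (initCaps, TXT) and (allCaps, TXT), since an allCaps token is also initCaps.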
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w+1 is found in the list of person first names, the corresponding PersonFirstName feature for w+1 is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ...,
Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
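The "frequency" computation for candidate corporate suffixes described above (counting the number of distinct preceding tokens) can be sketched as follows. The function name and the use of plain organization-name strings are our own illustration, not the paper's implementation.

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each final token of an organization name, count the number
    of *distinct* tokens that precede it, as in the example above."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            # record the token immediately before the final token
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceding.items()}

# The example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times gives Corp. a "frequency" of 2.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequencies(orgs))  # {'Corp.': 2}
```

Counting distinct contexts rather than raw occurrences favors tokens that end many different organization names, which is the property a suffix list should capture.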
For a token w that is in a consecutive sequence of initCaps tokens, if one of the tokens immediately following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If one of the tokens immediately preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the tokens preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)

Table 2: Sources of Dictionaries

  Description         Source
  Location Names      http://www.timeanddate.com
                      http://www.cityguide.travel-guides.com
                      http://www.worldtravelguide.net
  Corporate Names     http://www.fmlx.com
  Person First Names  http://www.census.gov/genealogy/names
  Person Last Names

The McCann family . . .
(3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp.
has an additional feature of I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.[2] For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group

             MUC6     MUC7
  Baseline   90.75%   85.22%
  + ICOC     91.50%   86.24%
  + CSPP     92.89%   86.96%
  + ACRO     93.04%   86.99%
  + SOIC     93.25%   87.22%
  + UNIQ     93.27%   87.24%

Table 5: Comparison of results for MUC6.

Table 4: Training Data

                MUC6                     MUC7
  Systems       No. of    No. of        No. of    No. of
                Articles  Tokens        Articles  Tokens
  MENERGI       318       160,000       200       180,000
  IdentiFinder  -         650,000       -         790,000
  MENE          -         -             350       321,000

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.[3] In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
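The acronym (ACRO) matching described in Section 4.2 can be sketched as below. The tokenization, the label names, and the simple first-letter matching rule are our reading of the text, not the authors' code.

```python
def acro_features(doc_tokens):
    """Sketch of the ACRO group: all-caps words are stored as acronyms,
    and sequences of initial-caps words whose first letters spell an
    acronym get A_begin / A_continue / A_end; the acronym itself gets
    A_unique."""
    acronyms = {t for t in doc_tokens
                if len(t) > 1 and t.isalpha() and t.isupper()}
    labels = {}  # token index -> feature name
    for i in range(len(doc_tokens)):
        for acro in acronyms:
            k = len(acro)
            span = doc_tokens[i:i + k]
            # each word in the span must start with the matching
            # capital letter of the acronym (and not be the acronym)
            if (len(span) == k and span[0] != acro and
                    all(w[:1] == a for w, a in zip(span, acro))):
                labels[i] = "A_begin"
                for j in range(i + 1, i + k - 1):
                    labels[j] = "A_continue"
                labels[i + k - 1] = "A_end"
    for i, t in enumerate(doc_tokens):
        if t in acronyms and i not in labels:
            labels[i] = "A_unique"
    return labels

doc = "If FCC approves , Federal Communications Commission officials agree".split()
print(acro_features(doc))
```

On this toy document, Federal / Communications / Commission receive A_begin / A_continue / A_end, and FCC receives A_unique, mirroring the FCC example in the text.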
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For a fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

[2] MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
[3] Training data for IdentiFinder is actually given in words (i.e., 650K and 790K words), rather than tokens.

Table 6: Comparison of results for MUC7.

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results. The effect of a second reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules. We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 x total number of possible zones) features.

Case and Zone of w_{i+1} and w_{i-1}: Similarly, if w_{i+1} (or w_{i-1}) is initCaps, a corresponding feature (initCaps, zone) for the next (or previous) token is set to 1, etc.

Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w_i is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used together with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1. If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1. The same applies for w_{i-1}.
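As a toy illustration, binary feature groups of the kind described above might be computed as follows. The function and feature names here are hypothetical, a sketch of the idea rather than the authors' implementation:

```python
# Hypothetical sketch of binary local feature extraction; feature and
# function names are illustrative, not taken from MENERGI.

def case_of(token):
    """Return the capitalization class of a token, if any."""
    if token.isupper():
        return "allCaps"                      # e.g., IBM
    if token[:1].isupper():
        return "initCaps"                     # e.g., Mr.
    if token[:1].islower() and any(c.isupper() for c in token):
        return "mixedCaps"                    # e.g., eBay
    return None

def local_features(token, zone, is_first_word, lexicon):
    """Return the set of binary features 'set to 1' for a token."""
    feats = {"non-contextual"}                # set to 1 for all tokens
    feats.add("zone-" + zone)                 # exactly one zone feature fires
    case = case_of(token)
    if case is not None:
        feats.add("(%s, %s)" % (case, zone))  # case-and-zone feature group
    if token[:1].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")            # one of the token-string features
    if is_first_word:
        feats.add("firstword")
    if token in lexicon:                      # lexicon feature, subject to a
        feats.add("lexicon=" + token)         # frequency cutoff during training
    return feats

feats = local_features("Mr.", "TXT", True, {"Mr."})
```

For the token Mr. in the TXT zone at the start of a sentence, this sketch would set non-contextual, zone-TXT, (initCaps, TXT), InitCapPeriod, firstword, and the lexicon feature.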
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1. This is because in many cases the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. All lists except the location list are processed into lists of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w_{i+1} and w_{i-1} are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w_{i+1} is found in the list of person first names, the corresponding PersonFirstName feature for the next token is set to 1.

Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w_i is one of Monday, Tuesday,
..., Sunday, then the feature DayOfTheWeek is set to 1. If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list cslist of tokens that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
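The "frequency" computation described above, i.e., counting the distinct preceding tokens of each candidate suffix, can be sketched as follows. The data is the toy example from the text; the variable names are illustrative:

```python
from collections import defaultdict

# Sketch of the corporate-suffix "frequency" described above: the
# frequency of a candidate suffix is the number of DISTINCT tokens
# that precede it as the last word of an organization name.
org_names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5

preceding = defaultdict(set)  # suffix -> set of distinct previous tokens
for name in org_names:
    tokens = name.split()
    if len(tokens) >= 2:
        preceding[tokens[-1]].add(tokens[-2])

# "frequency" of each candidate suffix
frequency = {suffix: len(prev) for suffix, prev in preceding.items()}
print(frequency)  # {'Corp.': 2}
```

Counting distinct preceding tokens, rather than raw occurrences, prevents a suffix that happens to appear in one very frequent organization name from being ranked as a productive suffix.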
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence up to the token following it is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from w_{i-1} back to the word preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

    McCann initiated a new global system. (1)
    CEO of McCann . . . (2)
    The McCann family . . . (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

Table 2: Sources of Dictionaries

    Description          Source
    Location Names       http://www.timeanddate.com
                         http://www.cityguide.travel-guides.com
                         http://www.worldtravelguide.net
    Corporate Names      http://www.fmlx.com
    Person First Names   http://www.census.gov/genealogy/names
    Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document. w_i needs to be in initCaps to be considered for this feature. If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu). For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group

               MUC6      MUC7
    Baseline   90.75%    85.22%
    + ICOC     91.50%    86.24%
    + CSPP     92.89%    86.96%
    + ACRO     93.04%    86.99%
    + SOIC     93.25%    87.22%
    + UNIQ     93.27%    87.24%

All our results are obtained by using only the official training data provided by the MUC conferences. We did not train with both the MUC6 and MUC7 training data at the same time because the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder (training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens).

Table 4: Training Data

                  MUC6                     MUC7
                  Articles   Tokens        Articles   Tokens
    MENERGI       318        160,000       200        180,000
    IdentiFinder  -          650,000       -          790,000
    MENE          -          -             350        321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For a fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they had added list membership features, which helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data alone is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results. The effect of a second reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to information from the same document only. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive
J96-3004,J96-3004,5,70,The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.']",extractive W99-0613_vardha,W99-0613,6,34,The AdaBoost algorithm was developed for supervised learning.,"These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t_1, ..., t_n | s), where s is the sequence of words in a sentence, and t_1, ..., t_n is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t_1, ..., t_n | s, D), where t_1, ..., t_n is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F-measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h to the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word = the, and 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, ..., c_n | s, D) = ∏_{i=1..n} p(c_i | s, D) × P(c_i | c_{i-1}), where p(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen fewer than a small count of times during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
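The decoding step just described (per-word class probabilities from the classifier, transition probabilities of 1 for admissible class sequences and 0 otherwise, then dynamic programming) can be sketched as follows. This is an illustrative sketch, not the paper's code: the class inventory, the `admissible` rule, and the probabilities are made-up assumptions.

```python
# Sketch of decoding with 0/1 transition admissibility (illustrative only).
# word_probs[i] maps each class to p(class | word_i, context), as a maximum
# entropy classifier would supply; a dynamic program keeps, for each class,
# the best-scoring admissible path ending in that class.

def admissible(prev, cur):
    """Toy admissibility rule: a 'continue' or 'end' sub-class must follow
    a 'begin' or 'continue' sub-class of the same name class."""
    if cur.endswith("_continue") or cur.endswith("_end"):
        base = cur.rsplit("_", 1)[0]
        return prev in (base + "_begin", base + "_continue")
    return True

def viterbi(word_probs, classes):
    # best[c] = (probability, path) of the best admissible sequence ending in c
    best = {c: (word_probs[0].get(c, 0.0), [c]) for c in classes}
    for probs in word_probs[1:]:
        new_best = {}
        for c in classes:
            cands = [(p * probs.get(c, 0.0), path + [c])
                     for prev, (p, path) in best.items()
                     if admissible(prev, c)]
            new_best[c] = max(cands, key=lambda x: x[0], default=(0.0, []))
        best = new_best
    return max(best.values(), key=lambda x: x[0])

classes = ["person_begin", "person_end", "not_a_name"]
probs = [{"person_begin": 0.6, "not_a_name": 0.4},
         {"person_end": 0.7, "not_a_name": 0.3}]
score, path = viterbi(probs, classes)
```

Here the inadmissible pairing (e.g., not_a_name followed by person_end) is pruned, so the highest-probability admissible path is person_begin, person_end.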
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (fewer than a small count of times), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w-1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w-1) is set to 1.', 'Same for w+1. 
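The Case and Zone feature group described above can be sketched in a few lines. This is a minimal illustration under assumed helper names and a tuple representation of features, not the paper's implementation:

```python
# Sketch of the Case and Zone features: a token's case category is paired
# with the document zone it appears in. Per the description above, an
# allCaps token also fires the initCaps feature for its zone.

def case_category(token):
    if token.isupper():
        return "allCaps"        # all capital letters, e.g. "IBM"
    if token[:1].isupper():
        return "initCaps"       # starts with a capital letter, e.g. "Bush"
    if any(c.isupper() for c in token):
        return "mixedCaps"      # starts lower case but contains capitals
    return None                 # no case feature fires

def case_zone_features(token, zone):
    cat = case_category(token)
    feats = []
    if cat:
        feats.append((cat, zone))
    if cat == "allCaps":        # allCaps implies initCaps
        feats.append(("initCaps", zone))
    return feats
```

With four MUC6 zones this yields the (3 × number of zones) binary features the text counts.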
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w-1 is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
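The "frequency" count for candidate corporate suffixes described above (number of distinct preceding tokens, not raw occurrence count) can be sketched as follows; the function name and input format are assumptions for illustration:

```python
# Sketch of the corporate-suffix "frequency" count: the frequency of a
# candidate suffix is the number of DISTINCT tokens seen immediately before
# it in organization names, not how many times the suffix occurs.

from collections import defaultdict

def suffix_frequencies(org_names):
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            # record the token that precedes the final (candidate suffix) token
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}

# The example from the text: Electric Corp. 3 times, Manufacturing Corp. 5
# times, and Corp. never seen with any other preceding token.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freq = suffix_frequencies(orgs)
```

As in the text, the frequency of Corp. here is 2, despite eight raw occurrences.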
For a token w that is in a consecutive sequence of initCaps tokens (w, ..., w+n), if any of the tokens from w to w+n is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w-1 to w+n-1 is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w-1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2 (Sources of Dictionaries): Location Names – http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names – http://www.fmlx.com; Person First Names – http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
. (3)', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
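The ACRO feature group described above (match an all-caps acronym against sequences of initCaps words whose initials spell it, e.g. FCC vs. Federal Communications Commission) can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the tag names and matching rule are simplified assumptions:

```python
# Sketch of the ACRO global features: all-caps tokens are treated as
# acronyms, and any window of initCaps words whose initials spell an
# acronym receives A_begin / A_continue / A_end, while the acronym itself
# receives A_unique.

def acro_features(tokens):
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats = {}
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            if all(w[:1].isupper() for w in window) and \
               "".join(w[0] for w in window) == acro:
                feats[window[0]] = "A_begin"
                for w in window[1:-1]:
                    feats[w] = "A_continue"
                feats[window[-1]] = "A_end"
                feats[acro] = "A_unique"
    return feats

tokens = ["FCC", "said", "the", "Federal", "Communications",
          "Commission", "ruled"]
feats = acro_features(tokens)
```

For this example the expansion Federal Communications Commission is tagged A_begin / A_continue / A_end and FCC gets A_unique, mirroring the text.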
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.', 'Table 3 (F-measure after successive addition of each global feature group; MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4 (Training Data; no. of articles / no. of tokens): MENERGI 318 / 160,000 (MUC6) and 200 / 180,000 (MUC7); IdentiFinder – / 650,000 (MUC6) and – / 790,000 (MUC7); MENE – / – (MUC6) and 350 / 321,000 (MUC7).', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu; footnote 3: training data for IdentiFinder is actually given in words, i.e., 650K & 790K words, rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with more training data.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.']",extractive P05-1013_vardha,P05-1013,2,1,"In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.","A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t_1, ..., t_n | s), where s is the sequence of words in a sentence, and t_1, ..., t_n is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t_1, ..., t_n | s, D), where t_1, ..., t_n is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, . . . (1 http://maxent.sourceforge.net) 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, . . ., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, . . ., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) of w_{i-1} (or of w_{i+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string of w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
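A few of the binary local feature groups described above (zone, case-and-zone, first-word, token information) can be sketched as follows; this is a toy illustration, and the feature names are ours, not necessarily the paper's exact ones:

```python
def local_features(token, zone, is_first_word):
    """Toy extraction of some binary local feature groups: zone,
    case-and-zone (initCaps / allCaps), first-word, and one token-
    information feature (InitCapPeriod, e.g. 'Mr.')."""
    feats = {f"zone-{zone}"}
    if token[0].isupper():
        feats.add(f"initCaps-{zone}")   # starts with a capital letter
    if token.isupper():
        feats.add(f"allCaps-{zone}")    # made up of all capital letters
    if is_first_word:
        feats.add("firstword")          # first word of a sentence
    if token[0].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")      # capitalized and ends with '.'
    return feats

print(sorted(local_features("Mr.", "TXT", True)))
```

Note that, as in the paper, an allCaps token (e.g. "IBM") also fires the initCaps feature.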
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
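The "frequency" defined above (number of distinct preceding tokens, rather than raw counts) can be sketched as follows; the function name is ours, and the toy data reproduces the Electric Corp. / Manufacturing Corp. example:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """'Frequency' of a candidate corporate suffix = number of DISTINCT
    tokens that precede it as the last word of an organization name,
    matching the counting scheme described above."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# "Electric Corp." seen 3 times, "Manufacturing Corp." seen 5 times:
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequencies(names))  # {'Corp.': 2}
```

Counting distinct contexts rather than raw occurrences keeps a suffix that appears with many different organizations ranked above a token that merely repeats in one frequent name.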
For a token w_i that is in a consecutive sequence of initCaps tokens (w_i, . . ., w_j), if any of the tokens in this sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w_{i-1} to w_{j-1} is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w_{i-1}, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w_i seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6, MUC7): Baseline 90.75%, 85.22%; + ICOC 91.50%, 86.24%; + CSPP 92.89%, 86.96%; + ACRO 93.04%, 86.99%; + SOIC 93.25%, 87.22%; + UNIQ 93.27%, 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (No. of Articles / No. of Tokens, MUC6 then MUC7): MENERGI 318 / 160,000, 200 / 180,000; IdentiFinder – / 650,000, – / 790,000; MENE – / –, 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
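The acronym-matching (ACRO) feature group described earlier can be sketched as follows; this is an illustrative implementation under our own assumptions (the function name and feature labels are ours), reproducing the FCC / Federal Communications Commission example:

```python
def acro_features(acronym, doc_tokens):
    """Look for sequences of initial-capitalized words whose initials
    spell the acronym, and assign A_begin / A_continue / A_end to the
    sequence and A_unique to the acronym itself (a sketch of ACRO)."""
    feats = {}
    n = len(acronym)
    for start in range(len(doc_tokens) - n + 1):
        window = doc_tokens[start:start + n]
        initials = "".join(w[0] for w in window)
        if initials == acronym and all(w[0].isupper() for w in window):
            feats[window[0]] = "A_begin"
            for w in window[1:-1]:
                feats[w] = "A_continue"
            feats[window[-1]] = "A_end"
            feats[acronym] = "A_unique"
    return feats

doc = ["The", "Federal", "Communications", "Commission",
       "said", "FCC", "rules", "apply"]
print(acro_features("FCC", doc))
```

A real system would index acronyms per document and handle multi-letter initials; the sliding window here only shows the matching idea.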
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu; 3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive C00-2123,C00-2123,2,2,"From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.","A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
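As a recap of the maximum entropy model of Section 3.1, the exponential form p(o | h) = (1/Z(h)) prod_j alpha_j^f_j(h, o) can be evaluated directly once the weights are known; this is a toy sketch with made-up weights and a single hand-written feature, not the opennlp package's API:

```python
def maxent_prob(outcome, history, alphas, features, outcomes):
    """Evaluate p(o | h) = (1/Z(h)) * prod_j alpha_j ** f_j(h, o)
    for binary feature functions f_j, normalizing over all outcomes.
    Weights (alphas) would come from GIS training in the real system."""
    def score(o):
        s = 1.0
        for a, f in zip(alphas, features):
            s *= a ** f(history, o)
        return s
    z = sum(score(o) for o in outcomes)  # normalization Z(h)
    return score(outcome) / z

# One illustrative feature: fires when the outcome is "name" and the
# previous word is "Mr." (feature and weight are invented).
features = [lambda h, o: 1 if (o == "name" and h["prev"] == "Mr.") else 0]
alphas = [3.0]
p = maxent_prob("name", {"prev": "Mr."}, alphas, features, ["name", "not-name"])
print(round(p, 2))  # 0.75
```

With a single feature of weight 3.0 firing only for "name", the unnormalized scores are 3 and 1, giving p = 3/4.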
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i−1} and w_{i+1}: Similarly, if w_{i−1} (or w_{i+1}) is initCaps, a corresponding feature (initCaps, zone) for the previous (or next) token is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i−1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i−1}.
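The case-and-zone and token-information feature groups described above can be sketched as follows. This is a minimal illustration only (the paper's system is Java/opennlp-based, and Table 1 is not reproduced here, so the all-digits feature name is an assumption):

```python
import re

def case_zone_features(token, zone):
    """Case-and-zone features paired with the token's document zone."""
    feats = {}
    if token.isupper():
        feats[("allCaps", zone)] = 1
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1  # an allCaps token is also initCaps
    elif token[:1].islower() and token != token.lower():
        feats[("mixedCaps", zone)] = 1  # lower-case start, mixed case inside
    return feats

def token_info_features(token):
    """Two of the 10 token-string features; the others are omitted here."""
    feats = {}
    if token[:1].isupper() and token.endswith("."):
        feats["InitCapPeriod"] = 1  # e.g. "Mr."
    if re.fullmatch(r"\d+", token):
        feats["AllDigits"] = 1  # assumed feature name for an all-digit token
    return feats
```

For example, `case_zone_features("IBM", "TXT")` sets both the allCaps and initCaps features for the TXT zone, mirroring the remark that an allCaps token is also initCaps.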
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i−1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . .
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
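The “frequency” computation described above (counting distinct preceding tokens rather than raw occurrences) can be sketched as follows; a minimal illustration with function names of my own choosing:

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """For each last token of an organization name, count the number of
    DISTINCT preceding tokens it appears with (the paper's 'frequency')."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            prev_tokens[toks[-1]].add(toks[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

# The worked example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times gives Corp. a "frequency" of 2.
freq = suffix_frequency(["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5)
# -> {"Corp.": 2}
```

Counting distinct predecessors rather than occurrences keeps a suffix that appears many times with a single organization (e.g. repeated mentions of one company) from looking like a productive suffix.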
For a token in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence or the token immediately following it is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the one immediately preceding the sequence through the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names and Person Last Names: http://www.census.gov/genealogy/names. The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 (Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.) (Table 4: Training Data. Systems, with No. of Articles / No. of Tokens for MUC6 and MUC7: MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.) (Table 5: Comparison of results for MUC6.) For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's.
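The quoted error reductions follow directly from the Table 3 F-measures: an F-measure of p% leaves an error of (100 − p)%, and the reduction is the relative drop in that error. A quick check:

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error (percent) when the F-measure rises
    from baseline_f to final_f, both given as percentages."""
    baseline_err = 100.0 - baseline_f
    final_err = 100.0 - final_f
    return 100.0 * (baseline_err - final_err) / baseline_err

muc6 = error_reduction(90.75, 93.27)  # (9.25 - 6.73) / 9.25 -> ~27%
muc7 = error_reduction(85.22, 87.24)  # (14.78 - 12.76) / 14.78 -> ~14%
```

Both values round to the figures reported in the text (27% for MUC6, 14% for MUC7).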
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens. (Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W99-0613_vardha,W99-0613,8,255,"The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.","Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(C | S), where S is the sequence of words in a sentence, and C is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(C | S, D), where C is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S.
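The decoding this implies (per-word maximum entropy probabilities combined with the 0/1 transition admissibility of Section 3.2, maximized by dynamic programming) can be sketched as follows. This is a minimal illustration with names of my own choosing, assuming the per-word class distributions have already been computed:

```python
import math

def viterbi_admissible(me_probs, admissible):
    """Pick the class sequence with the highest product of per-word
    probabilities, allowing only admissible class-to-class transitions.

    me_probs: one dict per token, mapping class -> P(class | word, document),
              assumed to come from the maximum entropy classifier.
    admissible: function (prev_class, next_class) -> bool, the 0/1
                transition "probability" used to rule out sequences such as
                person begin followed by location unique."""
    # best log-probability of an admissible sequence ending in each class
    scores = {c: math.log(p) for c, p in me_probs[0].items()}
    backptrs = [{}]
    for dist in me_probs[1:]:
        new_scores, ptrs = {}, {}
        for c, p in dist.items():
            candidates = [(scores[prev] + math.log(p), prev)
                          for prev in scores if admissible(prev, c)]
            if candidates:  # at least one admissible way to reach class c
                new_scores[c], ptrs[c] = max(candidates)
        scores, backptrs = new_scores, backptrs + [ptrs]
    # follow back-pointers from the best final class
    best = max(scores, key=scores.get)
    seq = [best]
    for ptrs in reversed(backptrs[1:]):
        seq.append(ptrs[seq[-1]])
    return list(reversed(seq))
```

For instance, with a rule that person continue may only follow person begin or person continue, the sketch picks the best class sequence that never violates that constraint, even when an inadmissible sequence would have a higher raw product of probabilities.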
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (No. of Articles / No. of Tokens for MUC6 and MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder - / 650,000 and - / 790,000; MENE - / - and 350 / 321,000.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
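The acronym (ACRO) feature group described above, which matches all-capitalized tokens against the initial letters of initCaps word sequences in the same document, might be sketched as below; the function name and the exact spelling of the feature labels are assumptions:

```python
def acro_features(tokens, acronyms):
    """Assign A_begin / A_continue / A_end to initCaps sequences whose
    initial letters spell an acronym found in the document; the acronym
    token itself gets A_unique."""
    feats = {i: set() for i in range(len(tokens))}
    for i, tok in enumerate(tokens):
        if tok in acronyms:
            feats[i].add("A_unique")
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            if (all(w[:1].isupper() for w in window)
                    and "".join(w[0] for w in window) == acro):
                feats[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].add("A_continue")
                feats[i + n - 1].add("A_end")
    return feats

# FCC matches the initials of "Federal Communications Commission".
tokens = "FCC fined Even after Federal Communications Commission ruling".split()
print(acro_features(tokens, {"FCC"}))
```

On the FCC example from the text, Federal gets A_begin, Communications gets A_continue, Commission gets A_end, and the token FCC gets A_unique.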
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W11-2123_vardha,W11-2123,3,7,"This paper presents methods to query N-gram language models, minimizing time and space costs.","We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
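As a rough, toy illustration of the kind of conditional model used here (a maximum entropy classifier over binary features, whose exponential form is given in Section 3.1 of the paper), the following sketch computes a normalized probability from binary feature firings; the feature, outcome names, and weight value are invented for illustration:

```python
def maxent_prob(history, outcomes, features, weights):
    """p(o|h) proportional to the product of alpha_j^f_j(h,o) over features,
    normalized over outcomes. Each f_j is binary; alpha_j > 0 is its weight."""
    scores = {}
    for o in outcomes:
        score = 1.0
        for f, alpha in zip(features, weights):
            if f(history, o):   # binary feature function f_j(h, o)
                score *= alpha  # multiply in its weight alpha_j
        scores[o] = score
    z = sum(scores.values())    # normalization Z(h)
    return {o: s / z for o, s in scores.items()}

# Hypothetical binary feature: fires when the outcome is "name" and the
# previous word is "Mr." (real weights would be estimated, e.g. by GIS).
features = [lambda h, o: o == "name" and h["prev"] == "Mr."]
p = maxent_prob({"prev": "Mr."}, ["name", "not-name"], features, [3.0])
print(p)  # {'name': 0.75, 'not-name': 0.25}
```

In the actual system the weights are estimated by Generalized Iterative Scaling and the histories include both local and document-level (global) information.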
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997) p(o|h) = (1/Z(h)) * prod_j alpha_j^f_j(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h,o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h,o) = 1 if o = true and the previous word is "the", and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: the product, over the words in the sentence, of the class probability given by the maximum entropy classifier and the transition probability between successive classes.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token, while Borthwick uses the tokens from two tokens before to two tokens after it, we used only the token itself and the tokens immediately before and after it. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
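The testing step described in Section 3.2, combining per-word class probabilities from the classifier with 0/1 transition probabilities and dynamic programming over admissible sequences, might be sketched as follows; the function name, data layout, and class labels are illustrative assumptions:

```python
def best_admissible_sequence(probs, admissible):
    """probs: one dict per word mapping class -> probability from the classifier.
    admissible(prev, cur) -> bool encodes the 0/1 transition probabilities.
    Returns the highest-probability admissible class sequence (Viterbi-style)."""
    score = [dict(probs[0])]   # best score for each class at each position
    back = [{}]                # backpointers to the previous class
    for i in range(1, len(probs)):
        score.append({})
        back.append({})
        for cur, p in probs[i].items():
            best_prev, best = None, 0.0
            for prev, s in score[i - 1].items():
                if admissible(prev, cur) and s >= best:
                    best_prev, best = prev, s
            if best_prev is not None:
                score[i][cur] = best * p
                back[i][cur] = best_prev
    # trace back from the best final class
    last = max(score[-1], key=score[-1].get)
    seq = [last]
    for i in range(len(probs) - 1, 0, -1):
        seq.append(back[i][seq[-1]])
    return list(reversed(seq))

probs = [{"person_begin": 0.6, "person_unique": 0.4},
         {"person_end": 0.9, "location_unique": 0.1}]
ok = {("person_begin", "person_end"), ("person_unique", "location_unique")}
print(best_admissible_sequence(probs, lambda p, c: (p, c) in ok))
# ['person_begin', 'person_end']
```

Inadmissible sequences such as person begin followed by location unique are simply never extended, which is equivalent to multiplying in a transition probability of 0.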
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
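The dictionary lookups described in Section 4.1, where only initCaps tokens not in commonWords are tested against each list and locations may match as bigrams (e.g., New York), might look like the following sketch; the data layout and feature names are assumptions:

```python
def dictionary_features(tokens, dictionaries, common_words):
    """Only initCaps tokens not in common_words are tested against each list.
    Location entries may be bigrams of consecutive tokens; others are unigrams."""
    feats = {i: set() for i in range(len(tokens))}
    for i, tok in enumerate(tokens):
        if not tok[:1].isupper() or tok.lower() in common_words:
            continue  # frequent or lowercase tokens are never dictionary hits
        for name, entries in dictionaries.items():
            if tok in entries:                       # unigram match
                feats[i].add(name)
            if i + 1 < len(tokens) and (tok, tokens[i + 1]) in entries:
                feats[i].add(name)                   # bigram match
                feats[i + 1].add(name)
    return feats

dicts = {"LocationName": {("New", "York"), "Texas"},
         "PersonFirstName": {"Barry"}}
feats = dictionary_features("Barry flew to New York".split(), dicts, {"flew", "to"})
print(feats)
```

As in the text's example, if Barry is not in commonWords and is found in the list of person first names, the PersonFirstName feature is set to 1 for that token.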
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens in the sequence, or the word immediately preceding it, is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we also check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2 (Sources of Dictionaries): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (No. of Articles / No. of Tokens for MUC6 and MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder - / 650,000 and - / 790,000; MENE - / - and 350 / 321,000.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
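The SOIC computation described in Section 4.2, finding for an initCaps sequence its longest substring that also occurs elsewhere in the document as an initCaps sequence, can be sketched as follows; the function name and representation of sequences as token lists are assumptions:

```python
def longest_elsewhere_substring(seq, doc_sequences):
    """For an initCaps sequence seq, return its longest contiguous
    sub-sequence that also occurs in some *other* initCaps sequence of
    the document (sequences identical to seq are excluded)."""
    others = [s for s in doc_sequences if s != seq]
    for length in range(len(seq), 0, -1):          # try longest first
        for start in range(len(seq) - length + 1):
            sub = seq[start:start + length]
            for other in others:
                for j in range(len(other) - length + 1):
                    if other[j:j + length] == sub:
                        return sub
    return None

# The text's example: "Even News Broadcasting Corp." appears once, but
# "News Broadcasting Corp." recurs elsewhere in the document.
doc = [["Even", "News", "Broadcasting", "Corp."],
       ["News", "Broadcasting", "Corp."]]
print(longest_elsewhere_substring(doc[0], doc))
# ['News', 'Broadcasting', 'Corp.']
```

The tokens of the returned substring would then receive the I begin, I continue, and I end features, leaving Even outside the likely organization name.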
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive D10-1044_swastika,D10-1044,7,150,"Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.","By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T|S), where S is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T|S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S. 
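The shift from conditioning on the sentence alone to conditioning on the whole document can be illustrated with a toy feature extractor in which one feature is computed from the document rather than the sentence; the function and feature names below are invented for illustration and are not from the MENERGI system:

```python
def token_features(i, sentence, document_tokens):
    """Local features from the sentence S plus one global feature from
    the document D (hypothetical names, for illustration only)."""
    w = sentence[i]
    feats = {
        "word=" + w.lower(): 1,            # local: token identity
        "initCaps": int(w[:1].isupper()),  # local: case of this occurrence
        "firstword": int(i == 0),          # local: sentence-initial position
    }
    # Global: is some OTHER occurrence of this token in the document
    # also capitalized? (in the spirit of the ICOC feature group)
    occurrences = [t for t in document_tokens if t.lower() == w.lower()]
    if sum(t[:1].isupper() for t in occurrences) > int(w[:1].isupper()):
        feats["other-initCaps"] = 1
    return feats

doc = "Bush put a freeze on spending . President Bush spoke .".split()
sentence = doc[:7]  # first sentence of the toy document
feats = token_features(0, sentence, doc)
```

Here the sentence-initial "Bush" is ambiguous locally (its initCaps may be positional), but the second, unambiguous occurrence in the same document fires the global feature.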
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) Π_j α_j^f_j(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h,o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h,o) = 1 if o = true and previous word = the, and 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c_1, ..., c_n | s, D) = Π_i P(c_i | s, D) × Π_i P(c_i | c_i-1), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen fewer than a small count of times during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string of w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (fewer than a small count of times), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w-1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w-1) is set to 1.', 'Same for w+1. 
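The lexicon-feature cutoff described above (a token-string feature is kept only if the token is seen at least a small number of times in training) can be sketched as follows; the cutoff value of 3 and the function name are arbitrary illustrative choices, not the paper's settings:

```python
from collections import Counter

def select_lexicon_features(training_tokens, cutoff=3):
    """Keep a token-string feature only if the token occurs at least
    `cutoff` times in the training data; rarer tokens contribute no
    lexicon feature (all features in the group stay 0)."""
    counts = Counter(t.lower() for t in training_tokens)
    return {t for t, c in counts.items() if c >= cutoff}

selected = select_lexicon_features(["the"] * 5 + ["Corp."] * 3 + ["Barry"])
# "the" and "corp." survive the cutoff; the singleton "barry" does not
```

At test time, a token whose lowercased string is not in the selected set simply fires no feature from this group, matching the "all features in this group are set to 0" behavior described above.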
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w-1 is found in the list of person first names, the corresponding feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . .
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
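The "frequency" computation for candidate corporate suffixes (number of distinct preceding tokens, not raw counts) can be sketched as below; the function name is ours, and only the Electric Corp. / Manufacturing Corp. example is taken from the text:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """'Frequency' of a candidate corporate suffix = number of DISTINCT
    tokens seen immediately before it as the last word of an
    organization name, so raw repetition of a name does not count."""
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times:
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freqs = suffix_frequencies(orgs)  # -> {"Corp.": 2}
```

Using distinct preceding tokens rather than raw counts keeps a suffix that appears in many different organization names ranked above a token that merely belongs to one frequently mentioned name.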
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token needs to be in initCaps to be considered for this feature.', 'If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3: F-measure after successive addition of each global feature group (MUC6, MUC7): Baseline 90.75%, 85.22%; + ICOC 91.50%, 86.24%; + CSPP 92.89%, 86.96%; + ACRO 93.04%, 86.99%; + SOIC 93.25%, 87.22%; + UNIQ 93.27%, 87.24%. Table 4: Training Data (No. of Articles / No. of Tokens): MENERGI 318 / 160,000 (MUC6) and 200 / 180,000 (MUC7); IdentiFinder – / 650,000 (MUC6) and – / 790,000 (MUC7); MENE – / – (MUC6) and 350 / 321,000 (MUC7). Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's.
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive P11-1061_swastika,P11-1061,5,158,They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T|S), where S is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T|S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique. Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

p(o | h) = (1 / Z(h)) · Π_j α_j^{f_j(h, o)}

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

f_j(h, o) = 1 if o = true and previous word = the; 0 otherwise.

The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package¹. In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.

¹ http://maxent.sourceforge.net

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique). To eliminate such sequences, we define a transition probability P(c_i | c_{i−1}) between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows:

P(c_1, ..., c_n | s, D) = Π_{i=1..n} P(c_i | s, D) · P(c_i | c_{i−1})

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w_i, while Borthwick uses tokens from w_{i−2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i−1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w_i, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Table 1: Features based on the token string

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × total number of possible zones) features.

Case and Zone of w_{i−1} and w_{i+1}: Similarly, if w_{i−1} (or w_{i+1}) is initCaps, a corresponding feature (initCaps, zone) for w_{i−1} (or for w_{i+1}) is set to 1, etc.

Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w_i is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w_{i−1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1. If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1. Same for w_{i−1}.
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w_{i−1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.

Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w_i is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1. If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
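The "frequency" computation described here (the number of distinct preceding tokens for each candidate suffix) can be sketched directly. This is an illustration only; the function name suffix_frequencies and the data layout are assumptions, not the authors' implementation:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each last token of an organization name, count the number of
    distinct immediately preceding tokens -- the "frequency" described
    in the text."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# Worked example from the text: "Electric Corp." seen 3 times and
# "Manufacturing Corp." seen 5 times gives Corp. a frequency of 2,
# because only the number of *distinct* previous tokens matters.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequencies(names))  # {'Corp.': 2}
```

The most frequent suffixes under this count would then be compiled into Corporate-Suffix-List, with Person-Prefix-List built analogously from tokens preceding person names.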
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence up to and including the token just after it is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from the word preceding the sequence up to w_i is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)
The McCann family . . . (3)

Table 2: Sources of Dictionaries

Description         Source
Location Names      http://www.timeanddate.com
                    http://www.cityguide.travel-guides.com
                    http://www.worldtravelguide.net
Corporate Names     http://www.fmlx.com
Person First Names  http://www.census.gov/genealogy/names
Person Last Names

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w_i seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp. has an additional feature of I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document. w_i needs to be in initCaps to be considered for this feature. If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.²

Table 3: F-measure after successive addition of each global feature group

           MUC6     MUC7
Baseline   90.75%   85.22%
+ ICOC     91.50%   86.24%
+ CSPP     92.89%   86.96%
+ ACRO     93.04%   86.99%
+ SOIC     93.25%   87.22%
+ UNIQ     93.27%   87.24%

Table 4: Training Data

Systems       MUC6                             MUC7
              No. of Articles  No. of Tokens   No. of Articles  No. of Tokens
MENERGI       318              160,000         200              180,000
IdentiFinder  –                650,000         –                790,000
MENE          –                –               350              321,000

Table 5: Comparison of results for MUC6

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets. All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder³. In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
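The reported error reductions follow from the F-measures in Table 3: the relative reduction in error is the gain in F-measure divided by the baseline's error (100 minus the baseline F-measure). A small arithmetic check (the helper name error_reduction is ours, for illustration only):

```python
def error_reduction(baseline_f, new_f):
    """Relative reduction in error when F-measure rises from baseline_f
    to new_f (both given in percent)."""
    return (new_f - baseline_f) / (100.0 - baseline_f)

# Figures from Table 3 (baseline vs. all global feature groups added):
print(round(error_reduction(90.75, 93.27) * 100))  # MUC6: 27 (% reduction)
print(round(error_reduction(85.22, 87.24) * 100))  # MUC7: 14 (% reduction)
```

This reproduces the 27% (MUC6) and 14% (MUC7) error reductions quoted in the text.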
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.

²MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
³Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1.
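The decoding step described in Section 3.2 (a 0/1 transition probability between sub-classes, combined with a dynamic programming search over the per-token maximum entropy probabilities) can be sketched as follows. This is a minimal illustration, not the authors' code: the class inventory is reduced to one name class plus not-a-name, and the admissibility rules, class names, and helper names are our assumptions.

```python
import math

# Hypothetical sub-class tags for a single name class plus not-a-name ("O");
# the full system uses 7 name classes x 4 sub-classes + 1 = 29 classes.
CLASSES = ["person_begin", "person_continue", "person_end", "person_unique", "O"]

def admissible(prev, cur):
    """Transition 'probability': 1 (True) if the class pair is admissible, else 0."""
    if prev in ("person_begin", "person_continue"):
        # an open name must be continued or closed
        return cur in ("person_continue", "person_end")
    # person_end, person_unique, and O may be followed by a new name or O
    return cur in ("person_begin", "person_unique", "O")

def viterbi(token_probs):
    """token_probs: one dict per token mapping class -> p(class | features),
    as a maximum entropy classifier would produce.  Returns the admissible
    class sequence with the highest product of per-token probabilities."""
    best = {c: (math.log(token_probs[0].get(c, 1e-12)), [c]) for c in CLASSES}
    for probs in token_probs[1:]:
        new_best = {}
        for cur in CLASSES:
            cands = [(lp + math.log(probs.get(cur, 1e-12)), path + [cur])
                     for prev, (lp, path) in best.items()
                     if admissible(prev, cur)]
            if cands:
                new_best[cur] = max(cands)
        best = new_best
    return max(best.values())[1]

# A greedy argmax here would pick person_begin -> person_unique (inadmissible);
# the DP search corrects it to person_begin -> person_end.
tags = viterbi([{"person_begin": 0.8, "person_unique": 0.1, "O": 0.1},
                {"person_unique": 0.6, "person_end": 0.3, "O": 0.1}])
# -> ['person_begin', 'person_end']
```

Working in log space simply avoids underflow on long sentences; the 1e-12 floor stands in for classes the classifier assigns no mass.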
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the feature PersonFirstName for w+1 is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . .
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
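The "frequency" used to rank candidate corporate suffixes counts distinct preceding tokens, not raw occurrences. A small sketch of that counting, matching the Electric Corp. / Manufacturing Corp. example above (the function name and the company names are ours, purely illustrative):

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each last token of an organization name, count the number of
    DISTINCT preceding tokens it appears with in the training data."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times, and Corp.
# with no other preceding token -> "frequency" of Corp. is 2, not 8.
training_orgs = (["General Electric Corp."] * 3
                 + ["Acme Manufacturing Corp."] * 5)
freqs = suffix_frequencies(training_orgs)
# freqs["Corp."] == 2
```

Counting distinct contexts rather than raw counts keeps a suffix that appears with one very frequent organization from dominating the list.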
For a token w that is in a consecutive sequence of initCaps tokens (w_s, . . ., w_e), if any of the tokens from w+1 to w_e is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w_s-1 to w-1 is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w_s-1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', 'Table 2 (Sources of Dictionaries): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy [2].', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (Systems: No. of Articles / No. of Tokens, MUC6 then MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder [3].', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's.
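The 27% and 14% error reductions quoted above follow directly from the Table 3 F-measures if "error" is taken as 100 minus F. A quick check (the function name is ours):

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in F-measure error, where error = 100 - F."""
    return 100.0 * (final_f - baseline_f) / (100.0 - baseline_f)

# Table 3 figures: baseline vs. all global feature groups added
muc6 = error_reduction(90.75, 93.27)   # about 27% for MUC6
muc7 = error_reduction(85.22, 87.24)   # about 14% for MUC7
```

For MUC6, the error drops from 9.25 to 6.73 points (2.52/9.25, about 27%); for MUC7, from 14.78 to 12.76 points (2.02/14.78, about 14%), matching the figures reported in the text.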
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '[2] MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '[3] Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with more training data.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive D10-1083,D10-1083,4,34,Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.,"Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T | S), where S is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) * product_j alpha_j^f_j(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is "the", and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package [1].', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '[1] http://maxent.sourceforge.net', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document is defined as
follows: P(c_1, ..., c_n | s) = product_i P(c_i | c_{i-1}) * P(c_i | s), where P(c_i | s) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w+1) is set to 1.', 'Same for w-1.
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the feature PersonFirstName for w+1 is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . .
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
For a token w that is in a consecutive sequence of initCaps tokens (w_s, . . ., w_e), if any of the tokens from w+1 to w_e is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w_s-1 to w-1 is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w_s-1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', 'Table 2 (Sources of Dictionaries): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . .
.', '(3)In sentence (1), McCann can be a person or an orga nization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with â\x80\x9cBush put a freeze on . . .', 'â\x80\x9d, because Bush is the first word, the initial caps might be due to its position (as in â\x80\x9cThey put a freeze on . . .', 'â\x80\x9d).', 'If somewhere else in the document we see â\x80\x9crestrictions put in place by President Bushâ\x80\x9d, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder â\x80\x93 650,000 â\x80\x93 790,000 MENE â\x80\x93 â\x80\x93 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7,14%.', 'ICOC and CSPP contributed the greatest im provements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
Named Entity Recognition: A Maximum Entropy Approach Using Global Information

This paper presents a maximum entropy-based named entity recognizer (NER). It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier. Previous work that involves gathering information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier. In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.

A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). A named entity recognizer (NER) is useful in many NLP applications such as information extraction and question answering.
On its own, a NER can also provide users who are looking for person or organization names with quick information. In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.

Statistical NERs usually find the sequence of tags that maximizes the probability P(t | s), where s is the sequence of words in a sentence and t is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).

We propose maximizing P(t | s, D), where t is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s.
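The contrast between the two objectives can be written out explicitly (a reconstruction in standard notation; the symbols s, t, and D follow the definitions above):

```latex
% Sentence-level decoding used by most statistical NERs:
%   choose the tag sequence that maximizes the conditional probability
\hat{t} = \operatorname*{arg\,max}_{t} \; P(t \mid s)

% Proposed objective: condition on document-level information D as well
\hat{t} = \operatorname*{arg\,max}_{t} \; P(t \mid s, D)
```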
Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).

We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush", then "Bush"). As such, global information from the whole context of a document is important for recognizing named entities more accurately. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.

Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).

Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods.

Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 * 20% output is used to train the reference resolution component.

We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.

The system described in
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) * prod_j alpha_j^f_j(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is "the", and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we compare results of MENE, IdentiFinder, and MENERGI.

3.2 Testing

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c_1, ..., c_n | s, D) = prod_i P(c_i | s, D), where each P(c_i | s, D) is determined by the maximum entropy classifier (multiplied by the transition probabilities defined above). A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses tokens from two tokens before to two tokens after w, we used only the previous token, w itself, and the next token. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 x total number of possible zones) features.

Case and Zone of the Previous and Next Token: Similarly, if the previous (or next) token is initCaps, a corresponding (initCaps, zone) feature for that neighboring token is set to 1, etc.

Token Information: This group consists of 10 features based on the string of w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, next-token-string) is set to 1. If w is not initCaps, then (not-initCaps, next-token-string) is set to 1. The same is done for the previous token.
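A few of the local feature groups above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function and feature names are hypothetical, covering the Case and Zone, First Word, and InitCapPeriod (Token Information) groups:

```python
from typing import Optional

def case_feature(token: str) -> Optional[str]:
    """Case category of a token, as used in the Case and Zone group."""
    if token.isupper():
        return "allCaps"          # an allCaps token is also initCaps
    if token[:1].isupper():
        return "initCaps"
    if token[:1].islower() and any(c.isupper() for c in token):
        return "mixedCaps"
    return None

def local_features(token: str, zone: str, is_first_word: bool) -> set:
    """Collect the binary features that fire for one token (a small subset)."""
    feats = {"non-contextual"}          # fires for every token
    feats.add("zone-" + zone)           # exactly one zone feature per token
    case = case_feature(token)
    if case is not None:
        feats.add("(%s, %s)" % (case, zone))   # Case and Zone
    if is_first_word:
        feats.add("firstword")
    if token[:1].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")      # e.g., "Mr."
    return feats
```

For example, `local_features("Mr.", "TXT", True)` fires non-contextual, zone-TXT, (initCaps, TXT), firstword, and InitCapPeriod.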
In the case where the next token is a hyphen, the token after the hyphen is also used as a feature: (initCaps, token-after-hyphen) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the previous and next tokens are tested against each list, and if found, a corresponding feature will be set to 1. For example, if the next token is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ...,
Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
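The "distinct preceding tokens" frequency described above can be sketched as follows (an illustrative reconstruction, not the authors' code; function and variable names are hypothetical). It reproduces the worked example from the text:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each last token of an organization name, count the number of
    distinct tokens seen immediately before it (the "frequency" above)."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times, and Corp.
# never seen with any other preceding token -> "frequency" of Corp. is 2.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freqs = suffix_frequencies(names)
```

The most frequent suffixes under this count would then be compiled into Corporate-Suffix-List; note that raw occurrence counts (8 for Corp. here) would instead reward suffixes of a single frequent name.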
For a token w in a consecutive sequence of initCaps tokens, if a token of the sequence or the token following it is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from the one preceding the sequence up to the last token of the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)

Table 2: Sources of Dictionaries
Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net
Corporate Names: http://www.fmlx.com
Person First Names, Person Last Names: http://www.census.gov/genealogy/names

The McCann family . . . (3)
In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs elsewhere in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu).

Table 3: F-measure after successive addition of each global feature group
            MUC6     MUC7
Baseline    90.75%   85.22%
+ ICOC      91.50%   86.24%
+ CSPP      92.89%   86.96%
+ ACRO      93.04%   86.99%
+ SOIC      93.25%   87.22%
+ UNIQ      93.27%   87.24%

Table 4: Training Data
              MUC6                   MUC7
              Articles  Tokens       Articles  Tokens
MENERGI       318       160,000      200       180,000
IdentiFinder  -         650,000      -         790,000
MENE          -         -            350       321,000

Table 5: Comparison of results for MUC6

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets. All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder (training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens).

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
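The error reductions quoted above follow directly from the F-measures in Table 3; a quick check of the arithmetic (illustrative code, not part of the system):

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error (100 - F) when moving from the baseline
    (local features only) to the full feature set with global features."""
    base_err = 100.0 - baseline_f
    final_err = 100.0 - final_f
    return (base_err - final_err) / base_err

muc6 = error_reduction(90.75, 93.27)   # roughly 0.27, i.e., 27%
muc7 = error_reduction(85.22, 87.24)   # roughly 0.14, i.e., 14%
```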
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W99-0613_vardha,W99-0613,5,33,The second algorithm builds on a boosting algorithm called AdaBoost.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | s), where s is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
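The proposal above — conditioning each tag on document-wide information D rather than only the sentence s — can be sketched with a toy log-linear (maximum entropy) model; all feature names, weights, and tags here are invented for illustration, not the authors' actual feature set:

```python
import math

def features(token, sentence, document):
    """Feature functions that may inspect the whole document, not just the sentence."""
    feats = ["bias"]
    if token[0].isupper():
        feats.append("initCaps")
    # Global feature: the same token occurs elsewhere in the document
    # preceded by a person prefix such as "Mr."
    for sent in document:
        for i, w in enumerate(sent):
            if w == token and i > 0 and sent[i - 1] == "Mr.":
                feats.append("Other-PP")
    return feats

WEIGHTS = {  # hypothetical trained weights
    ("bias", "O"): 1.0,
    ("initCaps", "PER"): 0.8,
    ("Other-PP", "PER"): 2.0,
}

def prob(tag, token, sentence, document, tags=("PER", "ORG", "O")):
    """Normalized exponential-form probability over tags."""
    def score(t):
        return math.exp(sum(WEIGHTS.get((f, t), 0.0)
                            for f in features(token, sentence, document)))
    return score(tag) / sum(score(t) for t in tags)

doc = [["McCann", "initiated", "a", "new", "global", "system", "."],
       ["Mr.", "McCann", "said", "so", "."]]
# Sentence-only evidence vs. whole-document evidence for the same token:
p_local = prob("PER", "McCann", doc[0], [doc[0]])
p_with_global = prob("PER", "McCann", doc[0], doc)
print(p_with_global > p_local)  # True: global evidence raises P(PER)
```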
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) * prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', '1 http://maxent.sourceforge.net', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c1, ..., cn assigned to the words in a sentence s in a document D is defined as 
follows: P(c1, ..., cn | s, D) = prod_i P(ci | s, D) * P(ci | ci-1), where P(ci | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
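The "frequency" described above — the number of distinct previous tokens seen before each candidate last token of an organization name — can be sketched as follows (helper and variable names are hypothetical, not the authors' code):

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """Map each candidate suffix (last token of an organization name)
    to the number of DISTINCT tokens seen immediately before it."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# The paper's example: "Electric Corp." seen 3 times,
# "Manufacturing Corp." seen 5 times, Corp. with no other predecessors.
training_orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freq = suffix_frequencies(training_orgs)
print(freq["Corp."])  # 2 distinct preceding tokens
```

Counting distinct predecessors rather than raw occurrences keeps a suffix like Corp. from being dominated by one frequent company name.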
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3 (F-measure after successive addition of each global feature group; MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%. Table 4 (Training Data; no. of articles / no. of tokens, MUC6 then MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000. Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W99-0623_vardha,W99-0623,4,37,"One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, ..., c_n | s, D) = ∏_{i=1..n} P(c_i | s, D) × P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a corresponding feature (initCaps, zone) for that position is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
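The case-and-zone feature group above can be sketched as follows. The function names and the dictionary-of-features representation are illustrative, not the paper's implementation; zone names follow the MUC6 example (TXT, HL, DATELINE, DD):

```python
# A sketch of the case-and-zone features described above: each token
# gets a zone feature plus a (case-class, zone) feature. As noted in
# the text, an allCaps token is also initCaps.

def case_class(token):
    if token.isupper():
        return "allCaps"
    if token[0].isupper():
        return "initCaps"
    if any(c.isupper() for c in token):
        return "mixedCaps"   # starts lowercase, mixed case inside
    return "lowercase"

def case_zone_features(token, zone):
    feats = {("zone", zone): 1}     # one of the four zone features
    cc = case_class(token)
    if cc != "lowercase":
        feats[(cc, zone)] = 1
        if cc == "allCaps":         # allCaps implies initCaps
            feats[("initCaps", zone)] = 1
    return feats

# zone, allCaps, and initCaps features all fire for an all-caps token:
print(case_zone_features("IBM", "TXT"))
```

The binary features are represented here as a sparse dict of name-to-1 entries, mirroring the set-to-1 convention used throughout the feature descriptions.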
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
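The "frequency" computation described above counts distinct preceding tokens rather than raw occurrences. A minimal sketch, reusing the Electric Corp. / Manufacturing Corp. illustration (function name and list representation are assumptions of this sketch):

```python
from collections import defaultdict

# Frequency of a candidate corporate suffix = number of DISTINCT
# tokens that precede it as the last word of an organization name.

def suffix_frequencies(org_names):
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times:
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequencies(orgs))  # {'Corp.': 2}
```

Using a set of distinct preceders keeps a suffix that appears in many copies of one name (e.g., 5 mentions of Manufacturing Corp.) from outranking a suffix that genuinely attaches to many different organizations.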
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens from w_{i+1} up to and including the token just after the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the token just before the sequence up to w_{i-1} is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net. Corporate Names: http://www.fmlx.com. Person First Names: http://www.census.gov/genealogy/names. Person Last Names.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (No. of Articles / No. of Tokens): MENERGI: MUC6 318 / 160,000, MUC7 200 / 180,000; IdentiFinder: MUC6 – / 650,000, MUC7 – / 790,000; MENE: MUC7 350 / 321,000.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
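The 27% and 14% error reductions quoted above follow directly from the Table 3 F-measures; a quick arithmetic check (the function name is ours):

```python
# Relative reduction in error (1 - F) from the baseline to the full
# feature set, using the F-measures reported in Table 3.

def error_reduction(baseline_f, final_f):
    baseline_err = 1 - baseline_f
    final_err = 1 - final_f
    return (baseline_err - final_err) / baseline_err

print(round(error_reduction(0.9075, 0.9327) * 100))  # 27 (MUC6)
print(round(error_reduction(0.8522, 0.8724) * 100))  # 14 (MUC7)
```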
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W04-0213,W04-0213,6,1,"It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.","By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w−1 and w+1: Similarly, if w−1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w−1 (or for w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If a token string is seen infrequently during training (fewer than a small count of times), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w−1 and the next token w+1 is used with the initCaps information of the current token w. If w has initCaps, then a feature (initCaps, w−1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w−1) is set to 1.', 'Same for w+1. 
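The case-and-zone feature group described above can be sketched as follows. This is an illustrative reconstruction in Python (the paper's system used the Java-based opennlp maxent package); the function names and dictionary layout are our own.

```python
def case_of(token):
    """Classify the capitalization pattern of a token (illustrative)."""
    if token.isupper():
        return "allCaps"        # e.g., "FCC"; allCaps tokens also count as initCaps
    if token[:1].isupper():
        return "initCaps"       # e.g., "Bush"
    if token[:1].islower() and any(c.isupper() for c in token):
        return "mixedCaps"      # lower-case start, mixed case inside
    return "lowercase"

def case_zone_features(token, zone):
    """Emit binary (case, zone) features for one token.

    Exactly one zone feature is always on; a (case, zone) feature fires
    only for capitalized tokens, giving 3 cases x (number of zones)
    possible features, as in the text.
    """
    feats = {f"zone-{zone}": 1}
    case = case_of(token)
    if case in ("initCaps", "allCaps", "mixedCaps"):
        feats[f"({case}, zone-{zone})"] = 1
    if case == "allCaps":
        feats[f"(initCaps, zone-{zone})"] = 1   # allCaps implies initCaps
    return feats
```

For example, `case_zone_features("FCC", "HL")` turns on `zone-HL`, `(allCaps, zone-HL)`, and `(initCaps, zone-HL)`, while a lowercase token yields only its zone feature.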
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w−1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, a corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, . . ., December, then the feature MonthName is set to 1.', 'If the token is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
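The "frequency" computation described above (the number of distinct preceding tokens each candidate suffix appears with) can be sketched like this; the function name and data layout are our own illustration, not the paper's code:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each last token of an organization name, count the number of
    distinct preceding tokens it appears with in the training data."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {last: len(prevs) for last, prevs in preceding.items()}

# Example from the text: "Electric Corp." seen 3 times and
# "Manufacturing Corp." seen 5 times -> the frequency of "Corp." is 2,
# because it has two distinct preceding tokens.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
assert suffix_frequencies(orgs)["Corp."] == 2
```

Counting distinct contexts rather than raw occurrences favors genuinely productive suffixes over tokens that merely belong to one frequent name.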
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2: Sources of Dictionaries — Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If the word is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.', 'Table 3: F-measure after successive addition of each global feature group — Baseline: MUC6 90.75%, MUC7 85.22%; + ICOC: 91.50%, 86.24%; + CSPP: 92.89%, 86.96%; + ACRO: 93.04%, 86.99%; + SOIC: 93.25%, 87.22%; + UNIQ: 93.27%, 87.24%.', 'Table 4: Training Data — MENERGI: MUC6 318 articles / 160,000 tokens, MUC7 200 articles / 180,000 tokens; IdentiFinder: MUC6 650,000 tokens, MUC7 790,000 tokens; MENE: MUC7 350 articles / 321,000 tokens.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
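The error reductions quoted above follow directly from the F-measures in Table 3, taking error as 100 minus F-measure; a quick arithmetic check:

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error, where error = 100 - F-measure."""
    return (final_f - baseline_f) / (100.0 - baseline_f)

# MUC6: baseline 90.75 -> 93.27 with all global features added
assert round(error_reduction(90.75, 93.27) * 100) == 27
# MUC7: baseline 85.22 -> 87.24
assert round(error_reduction(85.22, 87.24) * 100) == 14
```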
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu; training data for IdentiFinder is actually given in words, i.e., 650K & 790K words, rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W04-0213,W04-0213,1,0,"This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.",This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(s | w), where w is the sequence of words in a sentence, and s is the sequence of named-entity tags assigned to the words in w. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(s | w, D), where s is the sequence of named-entity tags assigned to the words in the sentence w, and D is the information that can be extracted from the whole document containing w. 
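The tag-sequence maximization described above, combined with the 0/1 transition admissibility detailed later in Section 3.2, can be sketched as a Viterbi-style dynamic program. This is an illustrative Python sketch with toy class names and probabilities of our own choosing, not the paper's implementation:

```python
def decode(word_probs, classes, admissible):
    """Pick the class sequence with the highest product of per-word
    probabilities, scoring inadmissible class-to-class transitions as 0
    (a standard Viterbi-style dynamic program)."""
    best = {c: (word_probs[0][c], [c]) for c in classes}
    for probs in word_probs[1:]:
        best = {
            c: max(
                ((score * (1.0 if admissible(prev, c) else 0.0) * probs[c],
                  path + [c])
                 for prev, (score, path) in best.items()),
                key=lambda cand: cand[0],
            )
            for c in classes
        }
    return max(best.values(), key=lambda cand: cand[0])[1]

# Toy admissibility rule: "person_begin" may only be followed by "person_end".
CLASSES = ["person_begin", "person_end", "location_unique"]

def admissible(prev, cur):
    return cur == "person_end" if prev == "person_begin" else True

word_probs = [
    {"person_begin": 0.6, "person_end": 0.1, "location_unique": 0.3},
    {"person_begin": 0.1, "person_end": 0.2, "location_unique": 0.6},
]
# A greedy per-word choice would yield person_begin -> location_unique,
# an inadmissible sequence; the DP selects an admissible one instead.
```

Here `decode(word_probs, CLASSES, admissible)` returns `["location_unique", "location_unique"]` (score 0.18), beating the admissible person reading (0.6 × 0.2 = 0.12).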
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) ∏_j α_j^{f_j(h,o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word = the, and 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c1, . . ., cn | s, D) = ∏ P(ci | s, D) · P(ci | ci−1), where P(ci | s, D) is determined by the maximum entropy classifier and P(ci | ci−1) is the 0/1 transition probability defined above.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen fewer than a small count of times during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w−1 and w+1: Similarly, if w−1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w−1 (or for w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If a token string is seen infrequently during training (fewer than a small count of times), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w−1 and the next token w+1 is used with the initCaps information of the current token w. If w has initCaps, then a feature (initCaps, w−1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w−1) is set to 1.', 'Same for w+1. 
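The feature-cutoff selection mentioned above (a lexicon feature is kept only if the token string is seen at least a small number of times in training) can be sketched as follows; the cutoff value here is our own illustrative choice, as the paper does not state the exact count:

```python
from collections import Counter

def select_lexicon_features(training_tokens, cutoff=3):
    """Keep a token string as a lexicon feature only if it occurs at
    least `cutoff` times in the training data (illustrative cutoff)."""
    counts = Counter(training_tokens)
    return {tok for tok, n in counts.items() if n >= cutoff}

vocab = select_lexicon_features(["Corp."] * 5 + ["Xylophone"], cutoff=3)
# "Corp." is kept; the rare token "Xylophone" yields no lexicon feature,
# so for it all features in this group stay 0, as described in the text.
assert "Corp." in vocab and "Xylophone" not in vocab
```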
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w−1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, a corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, . . ., December, then the feature MonthName is set to 1.', 'If the token is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
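The MonthName / DayOfTheWeek / NumberString checks described at the start of this passage are simple membership tests; a minimal sketch (the number-string list here is our own illustrative subset, since the paper does not enumerate it):

```python
MONTHS = {"January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"}
DAYS = {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
        "Saturday", "Sunday"}
NUMBER_STRINGS = {"one", "two", "three", "four", "five",
                  "six", "seven", "eight", "nine", "ten"}  # illustrative subset

def date_number_features(token):
    """Binary MonthName / DayOfTheWeek / NumberString features (sketch).

    Per the text, MonthName additionally requires the token to be initCaps.
    """
    feats = {}
    if token[:1].isupper() and token in MONTHS:
        feats["MonthName"] = 1
    if token in DAYS:
        feats["DayOfTheWeek"] = 1
    if token in NUMBER_STRINGS:
        feats["NumberString"] = 1
    return feats
```

For example, `date_number_features("January")` sets only `MonthName`, while `date_number_features("two")` sets only `NumberString`.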
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2: Sources of Dictionaries — Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp. 
has an additional feature of I_end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If the word is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where the word appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 [Table 3: F-measure after successive addition of each global feature group -- MUC6 / MUC7: Baseline 90.75% / 85.22%; +ICOC 91.50% / 86.24%; +CSPP 92.89% / 86.96%; +ACRO 93.04% / 86.99%; +SOIC 93.25% / 87.22%; +UNIQ 93.27% / 87.24%] [Table 5: Comparison of results for MUC6] [Table 4: Training Data -- MENERGI: MUC6 318 articles / 160,000 tokens, MUC7 200 articles / 180,000 tokens; IdentiFinder: 650,000 and 790,000 tokens; MENE: MUC7 350 articles / 321,000 tokens] For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
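The 27% and 14% error reductions quoted above follow directly from the Table 3 F-measures, taking the error as 100 minus the F-measure; a quick arithmetic check (a sketch, not code from the paper):

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error when F-measure rises from baseline_f to
    final_f, with error taken as (100 - F), both in percent."""
    base_err, final_err = 100.0 - baseline_f, 100.0 - final_f
    return 100.0 * (base_err - final_err) / base_err

print(round(error_reduction(90.75, 93.27)))  # MUC6: 27
print(round(error_reduction(85.22, 87.24)))  # MUC7: 14
```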
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. [Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu] [Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens] [Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive P11-1061_swastika,P11-1061,1,1,"Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.",This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T | W), where W is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in W. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | W, D), where T is the sequence of named-entity tags assigned to the words in the sentence W, and D is the information that can be extracted from the whole document containing W. 
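As a toy illustration of why conditioning on the whole document helps (this is not the authors' model; the cue-word sets and the function are invented for illustration), a name such as McCann can be resolved by pooling evidence from every occurrence of it in the document:

```python
def classify_with_document(token, doc_tokens):
    """Score person vs. organization for `token` using evidence from every
    occurrence of it in the document, not just the current sentence."""
    person = org = 0
    for i, t in enumerate(doc_tokens):
        if t != token:
            continue
        if i > 0 and doc_tokens[i - 1] in {"Mr.", "Mrs.", "Dr."}:
            person += 1                      # person prefix seen elsewhere
        if i + 1 < len(doc_tokens) and doc_tokens[i + 1] in {"Corp.", "Inc.", "Ltd."}:
            org += 1                         # corporate suffix seen elsewhere
    return "person" if person > org else "organization"

doc = "CEO of McCann said Mr. McCann will retire".split()
print(classify_with_document("McCann", doc))  # person
```

A sentence-level classifier seeing only "CEO of McCann" has no access to the "Mr. McCann" mention; a document-conditioned model does.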
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to those of hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) * prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is "the", and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '[Footnote 1: http://maxent.sourceforge.net] 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: the product, over the words in the sentence, of the transition probabilities and the word-class probabilities, where each word-class probability is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token, while Borthwick uses the tokens from two before to two after the current token, we used only the previous token, the current token, and the next token. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. [Table 1: Features based on the token string]', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x the total number of possible zones) features.', 'Case and Zone of the Previous and Next Tokens: Similarly, if the previous (or next) token is initCaps, a corresponding (initCaps, zone) feature is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If a token string is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, previous-token-string) is set to 1.', 'If it is not initCaps, then (not-initCaps, previous-token-string) is set to 1.', 'The same is done for the next token. 
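The decoding procedure described in the Testing subsection above (maximum entropy class probabilities multiplied by 0/1 transition probabilities, then dynamic programming over the sequence) can be sketched as follows; the class names and probabilities are invented for illustration, and this is not the authors' code.

```python
def decode(class_probs, admissible):
    """Viterbi-style search for the most probable admissible class sequence.
    class_probs[t][c] is the classifier's probability of class c at word t;
    admissible(prev, cur) encodes the 0/1 transition probabilities."""
    classes = list(class_probs[0])
    best = {c: (class_probs[0][c], [c]) for c in classes}
    for t in range(1, len(class_probs)):
        new_best = {}
        for c in classes:
            cands = [(p * class_probs[t][c], path + [c])
                     for prev, (p, path) in best.items() if admissible(prev, c)]
            new_best[c] = max(cands, default=(0.0, []))
        best = new_best
    return max(best.values())[1]

# Hypothetical constraint: person_begin must be followed by person_end.
ok = lambda prev, cur: prev != "person_begin" or cur == "person_end"
probs = [{"person_begin": 0.7, "O": 0.3, "person_end": 0.0},
         {"O": 0.6, "person_end": 0.4, "person_begin": 0.0}]
print(decode(probs, ok))  # ['person_begin', 'person_end']
```

Without the admissibility constraint the greedy choice would be person_begin followed by O; zeroing out inadmissible transitions forces the globally best legal sequence instead.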
In the case where the next token is a hyphen, then the token after the hyphen is also used as a feature: (initCaps, that token string) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the previous and next tokens are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if a neighboring token is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If the token is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token in a consecutive sequence of initCaps tokens, if any of the subsequent tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens immediately preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) [Table 2: Sources of Dictionaries -- Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names] The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp. 
has an additional feature of I_end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If the word is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where the word appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 [Table 3: F-measure after successive addition of each global feature group -- MUC6 / MUC7: Baseline 90.75% / 85.22%; +ICOC 91.50% / 86.24%; +CSPP 92.89% / 86.96%; +ACRO 93.04% / 86.99%; +SOIC 93.25% / 87.22%; +UNIQ 93.27% / 87.24%] [Table 5: Comparison of results for MUC6] [Table 4: Training Data -- MENERGI: MUC6 318 articles / 160,000 tokens, MUC7 200 articles / 180,000 tokens; IdentiFinder: 650,000 and 790,000 tokens; MENE: MUC7 350 articles / 321,000 tokens] For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
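The ACRO matching described above (all-caps acronyms matched against sequences of initial-capitalized words whose initials spell them out) can be sketched as follows; this is a simplified illustration, not the authors' implementation.

```python
def acronym_features(doc_tokens):
    """Mark A_begin / A_continue / A_end on expansions whose initials spell
    an all-caps acronym found in the document, and A_unique on the acronym."""
    feats = {i: set() for i in range(len(doc_tokens))}
    acronyms = {t for t in doc_tokens if t.isupper() and len(t) > 1}
    for acro in acronyms:
        n = len(acro)
        for start in range(len(doc_tokens) - n + 1):
            span = doc_tokens[start:start + n]
            # Candidate expansion: n initial-capitalized words whose
            # initials spell the acronym.
            if all(w[:1].isupper() for w in span) and \
               "".join(w[0] for w in span) == acro:
                feats[start].add("A_begin")
                for k in range(start + 1, start + n - 1):
                    feats[k].add("A_continue")
                feats[start + n - 1].add("A_end")
                for i, t in enumerate(doc_tokens):
                    if t == acro:
                        feats[i].add("A_unique")
    return feats

# The FCC / Federal Communications Commission example from the text.
doc = "FCC says Federal Communications Commission will act".split()
f = acronym_features(doc)
print(sorted(f[2]), sorted(f[3]), sorted(f[4]), sorted(f[0]))
```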
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. [Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu] [Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens] [Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive P11-1061_swastika,P11-1061,7,161,"It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.","By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
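The testing procedure described earlier (per-word maximum entropy class probabilities combined with transition probabilities of 1 for admissible class sequences and 0 otherwise, then dynamic programming) can be sketched as below; the class names and probability tables in the example are illustrative assumptions, not values from the paper.

```python
import math

def best_class_sequence(classes, admissible, word_probs):
    """Viterbi-style search: transition probability is 1 for admissible
    class pairs and 0 otherwise, so inadmissible paths score -inf and
    the result maximizes the product of per-word class probabilities."""
    # word_probs: one dict per word mapping class -> P(class | features)
    scores = {c: math.log(word_probs[0].get(c, 1e-12)) for c in classes}
    backptrs = []
    for probs in word_probs[1:]:
        new_scores, ptr = {}, {}
        for c in classes:
            prev_score, prev = max(
                (scores[p] if admissible(p, c) else float("-inf"), p)
                for p in classes)
            new_scores[c] = prev_score + math.log(probs.get(c, 1e-12))
            ptr[c] = prev
        scores = new_scores
        backptrs.append(ptr)
    last = max(scores, key=scores.get)
    seq = [last]
    for ptr in reversed(backptrs):
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))
```

For instance, forbidding person_begin followed by not_a_name steers the decoder away from dangling begin tags, which is exactly the kind of inadmissible sequence (e.g., person begin followed by location unique) the transition constraint eliminates.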
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
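The "frequency" used to compile cslist (the number of distinct preceding tokens, so Corp. in the example above scores 2) can be sketched as follows; the function name is an illustrative assumption, not from the paper.

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """'Frequency' of a candidate corporate suffix = the number of
    distinct tokens seen immediately before it as the last token of
    an organization name (repeated bigrams count only once)."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# The paper's example: Electric Corp. seen 3 times and Manufacturing
# Corp. seen 5 times gives Corp. a "frequency" of 2, not 8.
```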
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3)In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
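The ACRO feature group described above (matching document acronyms like FCC against initCaps sequences like Federal Communications Commission) can be sketched as below. The A_begin/A_continue/A_end labels mirror the paper's sub-classes, but the function itself is an illustrative assumption; the acronym token itself would additionally get A unique.

```python
def acronym_labels(acronym, words):
    """If an all-caps acronym matches the initials of a sequence of
    initial-capitalized words, label the expansion with A_begin,
    A_continue, and A_end (as in the ACRO global feature group)."""
    if not (acronym.isupper() and len(words) >= 2
            and all(w[:1].isupper() for w in words)
            and "".join(w[0] for w in words) == acronym):
        return {}
    labels = {words[0]: "A_begin", words[-1]: "A_end"}
    for w in words[1:-1]:
        labels[w] = "A_continue"
    return labels
```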
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive D10-1044_swastika,D10-1044,4,144,"In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes * 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) * prod_j alpha_j^(f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '[Footnote 1: http://maxent.sourceforge.net]', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_{i=1}^{n} P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. [Table 1: Features based on the token string]', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 * total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i-1} (or for w_{i+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
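The Zone and Case-and-Zone groups described above can be sketched as follows (a minimal illustration; the feature-name strings and function name are ours, not the paper's internal representation, and the zone names follow the MUC6 example):

```python
def case_zone_features(token, zone, zones=("TXT", "HL", "DATELINE", "DD")):
    """Binary features: exactly one zone-* feature fires, plus (case, zone) features."""
    feats = {"zone-" + z: int(z == zone) for z in zones}
    init_caps = token[:1].isupper()
    all_caps = token.isalpha() and token.isupper()
    mixed = token[:1].islower() and token != token.lower()
    if init_caps:
        feats["initCaps-" + zone] = 1
    if all_caps:
        feats["allCaps-" + zone] = 1  # an allCaps token is also initCaps
    if mixed:
        feats["mixedCaps-" + zone] = 1
    return feats

print(case_zone_features("IBM", "TXT"))
```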
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, the corresponding PersonFirstName feature for w_{i+1} is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
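The "frequency" computation for candidate corporate suffixes (distinct preceding tokens, as in the Electric Corp. / Manufacturing Corp. example above) can be sketched as follows (an illustrative reconstruction; the function name is ours):

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """'Frequency' of a candidate suffix = number of DISTINCT tokens seen
    immediately before it as the last word of an organization name.

    E.g., Electric Corp. x3 and Manufacturing Corp. x5 give Corp. a frequency
    of 2, since repeated occurrences of the same preceding token count once.
    """
    preceders = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            preceders[toks[-1]].add(toks[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}

print(suffix_frequency(["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5))
```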
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens from w_{i+1} up to the token just after the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the token just before the sequence up to w_i is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) [Table 2: Sources of Dictionaries — Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names] The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w_i seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 [Table 3: F-measure after successive addition of each global feature group, MUC6 / MUC7 — Baseline: 90.75% / 85.22%; + ICOC: 91.50% / 86.24%; + CSPP: 92.89% / 86.96%; + ACRO: 93.04% / 86.99%; + SOIC: 93.25% / 87.22%; + UNIQ: 93.27% / 87.24%] [Table 4: Training Data, No. of Articles / No. of Tokens for MUC6 and MUC7 — MENERGI: 318 / 160,000 and 200 / 180,000; IdentiFinder: – / 650,000 and – / 790,000; MENE: – / – and 350 / 321,000] [Table 5: Comparison of results for MUC6] For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
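The error-reduction figures quoted above (27% for MUC6, 14% for MUC7) follow from the F-measures in Table 3; a quick arithmetic check (the helper name is ours):

```python
def error_reduction(base_f, new_f):
    """Relative reduction in error (%) when F-measure improves from base_f to new_f,
    treating (100 - F) as the error: e.g., 90.75 -> 93.27 shrinks the error
    from 9.25 to 6.73 points."""
    return (new_f - base_f) / (100.0 - base_f) * 100.0

print(error_reduction(90.75, 93.27))  # MUC6
print(error_reduction(85.22, 87.24))  # MUC7
```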
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '[Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu]', '[Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens]', '[Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive I05-5011,I05-5011,3,2,They proposed an unsupervised method to discover paraphrases from a large untagged corpus.,"We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × total number of possible zones) features.

Case and Zone of Previous and Next Tokens: Similarly, if the previous (or next) token is initCaps, a feature (initCaps, zone) for that position is set to 1, etc.

Token Information: This group consists of 10 features based on the token string, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If the token string is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token and of the next token is used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, previous-token string) is set to 1. If it is not initCaps, then (not-initCaps, previous-token string) is set to 1. The same applies for the next-token string.
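As an illustration, the local feature groups described so far (zone, case, token information, first-word, and lexicon features) might be sketched as below. The function and feature names are hypothetical, chosen for readability; the paper's actual implementation used the Java-based opennlp maxent package with its own representation.

```python
# Sketch of a local-feature extractor for one token. Feature names
# (e.g., "InitCapPeriod", "firstword") follow the paper's descriptions;
# the helper itself is illustrative, not the authors' code.

def case_of(tok):
    if tok.isupper():
        return "allCaps"
    if tok[0].isupper():
        return "initCaps"
    if tok[0].islower() and any(c.isupper() for c in tok):
        return "mixedCaps"
    return "lowerCase"

def local_features(tokens, i, zone, lexicon):
    tok = tokens[i]
    # Non-contextual, zone, and (case, zone) feature groups
    feats = {"nonContextual", f"zone-{zone}", f"({case_of(tok)},{zone})"}
    # A few of the 10 token-information features (Table 1)
    if tok[0].isupper() and tok.endswith("."):
        feats.add("InitCapPeriod")
    if any(c.isdigit() for c in tok):
        feats.add("ContainsDigit")
    # First-word feature
    if i == 0:
        feats.add("firstword")
    # Lexicon feature: the token string itself, if frequent in training
    if tok.lower() in lexicon:
        feats.add(f"lexicon={tok.lower()}")
    return feats

feats = local_features(["Mr.", "Smith", "arrived"], 0, "TXT", {"mr."})
```

For the token Mr. at the start of a TXT-zone sentence, this sketch fires firstword, InitCapPeriod, (initCaps,TXT), and the lexicon feature, mirroring the multiple-features-per-token behavior described above.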
In the case where the next token is a hyphen, the token after the hyphen is also used as a feature: (initCaps, token-after-hyphen) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the previous and next tokens are tested against each list, and if found, a corresponding feature will be set to 1.

Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, . . . , December, then the feature MonthName is set to 1. If the token is one of Monday, Tuesday,
. . . , Sunday, then the feature DayOfTheWeek is set to 1. If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
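The "frequency" computation just described (counting distinct preceding tokens rather than raw occurrences) can be sketched as below. The organization-name list is the paper's own worked example, not the MUC training data.

```python
# Sketch of the corporate-suffix "frequency" computation: the frequency
# of a candidate suffix is the number of DISTINCT tokens that precede it
# in organization names seen during training, not its raw count.

from collections import defaultdict

def suffix_frequencies(org_names):
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            # record the distinct token immediately before the last token
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# "Electric Corp." seen 3 times and "Manufacturing Corp." seen 5 times
# during training gives Corp. a "frequency" of 2, as in the text.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freq = suffix_frequencies(orgs)
```

The most frequent suffixes under this count would then be kept as Corporate-Suffix-List; a Person-Prefix-List could be derived the same way from tokens preceding person names.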
For a token that is in a consecutive sequence of initCaps tokens, if one of the tokens in or immediately after the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

Table 2: Sources of Dictionaries

  Description          Source
  Location Names       http://www.timeanddate.com
                       http://www.cityguide.travel-guides.com
                       http://www.worldtravelguide.net
  Corporate Names      http://www.fmlx.com
  Person First Names   http://www.census.gov/genealogy/names
  Person Last Names

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

  McCann initiated a new global system. (1)
  CEO of McCann . . . (2)
  The McCann family . .
. (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, the first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp.
has an additional feature of I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. The token needs to be in initCaps to be considered for this feature. If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where the token appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group

             MUC6     MUC7
  Baseline   90.75%   85.22%
  + ICOC     91.50%   86.24%
  + CSPP     92.89%   86.96%
  + ACRO     93.04%   86.99%
  + SOIC     93.25%   87.22%
  + UNIQ     93.27%   87.24%

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both the MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.

Table 4: Training Data

                MUC6                     MUC7
                Articles   Tokens        Articles   Tokens
  MENERGI       318        160,000       200        180,000
  IdentiFinder  –          650,000       –          790,000
  MENE          –          –             350        321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
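The quoted error reductions follow directly from the baseline and final F-measures, treating 100 minus the F-measure as the residual error; a quick check:

```python
# Check of the reported error reductions due to global features,
# treating (100 - F) as the error rate. The inputs are the baseline
# (local features only) and final (all global feature groups added)
# F-measures reported for MUC6 and MUC7.

def error_reduction(baseline_f, final_f):
    baseline_err = 100.0 - baseline_f
    final_err = 100.0 - final_f
    return (baseline_err - final_err) / baseline_err

muc6 = error_reduction(90.75, 93.27)  # about 0.27, i.e., 27%
muc7 = error_reduction(85.22, 87.24)  # about 0.14, i.e., 14%
```

This is why a 2.5-point F-measure gain on MUC6 counts as a 27% error reduction: the baseline error was only 9.25 points to begin with.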
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens.)

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with more training data. We have estimated the performance of IdentiFinder '99 at 200K words of training data from those graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data alone is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder â\x80\x93 650,000 â\x80\x93 790,000 MENE â\x80\x93 â\x80\x93 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7,14%.', 'ICOC and CSPP contributed the greatest im provements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W99-0613_vardha,W99-0613,2,2,"A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.","A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (a 27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (a 14% reduction in errors). These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by the other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).

We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush", then "Bush"). As such, global information from the whole context of a document is important to recognize named entities more accurately. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.

Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).

Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods.

Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution, i.e., finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus and tested on the remaining 20%, and this process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 × 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.

The system described in this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

    p(o | h) = exp( Σ_j λ_j f_j(h, o) ) / Z(h),

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting whether a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

    f_j(h, o) = 1 if o = true and the previous word is "the"; 0 otherwise.

The parameters λ_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972), an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package¹.

In Section 5, we try to compare the results of MENE, IdentiFinder, and MENERGI. However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.

¹ http://maxent.sourceforge.net

3.2 Testing

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows:

    P(c_1, ..., c_n | s, D) = Π_{i=1..n} P(c_i | s, D) · P(c_i | c_{i-1}),

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token, while Borthwick uses a window from two tokens before to two tokens after it, we used only the token itself, the previous token, and the next token. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint: multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. It imposes constraints that are based on the probability of each name class during training.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × total number of possible zones) features.

Case and Zone of Previous and Next Token: Similarly, if the previous token (or the next token) is initCaps, a corresponding feature (initCaps, zone) for that token is set to 1, etc.

Token Information: This group consists of 10 features based on the token string, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

[Table 1: Features based on the token string]

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If the token is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token and of the next token is used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, previous-token string) is set to 1; if it is not initCaps, then (not-initCaps, previous-token string) is set to 1. The same is done for the next token.
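The case-and-zone and token-shape features above are all binary and may co-fire for the same token. As a minimal sketch (an illustrative reconstruction, not the authors' code; the function name is an assumption):

```python
def case_zone_features(token: str, zone: str) -> dict:
    """Illustrative sketch of the binary case-and-zone and
    token-shape features described in the text (not the authors' code)."""
    feats = {}
    # Case and Zone: initCaps / allCaps / mixedCaps paired with the zone.
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1  # an allCaps token also fires initCaps
    if token.isupper():
        feats[("allCaps", zone)] = 1
    if token[:1].islower() and any(c.isupper() for c in token):
        feats[("mixedCaps", zone)] = 1
    # Token Information: starts with a capital and ends with a period (Mr.).
    if token[:1].isupper() and token.endswith("."):
        feats["InitCapPeriod"] = 1
    return feats
```

For instance, case_zone_features("Mr.", "TXT") fires both (initCaps, TXT) and InitCapPeriod, mirroring the Mr. example in the text; in the maximum entropy framework no priority scheme suppresses either feature.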
In the case where the next token is a hyphen, the token after the hyphen is also used as a feature: (initCaps, that token's string) is set to 1. This is because in many cases the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York); tokens are matched against the unigrams, and sequences of two consecutive tokens are matched against the bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps that are not found in commonWords are tested against each list in Table 2. If a token is found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the previous token and the next token are tested against each list, and if found, a corresponding feature will be set to 1.

Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If it is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1. If it is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
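The "distinct preceding tokens" frequency used to compile Corporate-Suffix-List can be sketched as follows (an illustrative reconstruction; the function name and the top_k cutoff are assumptions, since the exact cutoff is not given):

```python
from collections import defaultdict

def compile_suffix_list(org_names, top_k=15):
    """Compile a corporate-suffix list from training organization names.

    "Frequency" of a candidate suffix is the number of DISTINCT tokens
    seen immediately before it, as described in the text (e.g., Corp.
    preceded only by Electric and Manufacturing has frequency 2).
    """
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    # Rank candidate last tokens by number of distinct preceding tokens.
    ranked = sorted(preceding, key=lambda t: len(preceding[t]), reverse=True)
    return ranked[:top_k]
```

On the example from the text, Corp. gets frequency 2 however many times each full name occurs, because only distinct preceding tokens are counted.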
For a token in a consecutive sequence of initCaps tokens, if any of the tokens from it up to the end of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from the word preceding the sequence up to the token itself is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

    McCann initiated a new global system. (1)
    CEO of McCann . . . (2)
    The McCann family . . . (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

Table 2: Sources of Dictionaries
    Description          Source
    Location Names       http://www.timeanddate.com
                         http://www.cityguide.travel-guides.com
                         http://www.worldtravelguide.net
    Corporate Names      http://www.fmlx.com
    Person First Names   http://www.census.gov/genealogy/names
    Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. has an additional feature of I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. The token needs to be in initCaps to be considered for this feature. If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.²

Table 3: F-measure after successive addition of each global feature group
                 MUC6     MUC7
    Baseline     90.75%   85.22%
    + ICOC       91.50%   86.24%
    + CSPP       92.89%   86.96%
    + ACRO       93.04%   86.99%
    + SOIC       93.25%   87.22%
    + UNIQ       93.27%   87.24%

Table 4: Training Data
                   MUC6                      MUC7
                   Articles   Tokens         Articles   Tokens
    MENERGI        318        160,000        200        180,000
    IdentiFinder   –          650,000        –          790,000
    MENE           –          –              350        321,000

[Table 5: Comparison of results for MUC6]

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets. All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder³.

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
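The quoted error reductions follow from the F-measures in Table 3, taking error as 100 minus F-measure; a quick check:

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error, where error = 100 - F-measure."""
    base_err = 100.0 - baseline_f
    final_err = 100.0 - final_f
    return (base_err - final_err) / base_err

# MUC6: 90.75% -> 93.27%; MUC7: 85.22% -> 87.24% (baseline vs. + UNIQ)
muc6 = error_reduction(90.75, 93.27)  # about 0.27, i.e., 27%
muc7 = error_reduction(85.22, 87.24)  # about 0.14, i.e., 14%
```

Both values round to the reductions reported in the text.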
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive
C10-1045,C10-1045,5,22,The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags C that maximizes the probability P(C|S), where S is the sequence of words in a sentence, and C is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(C|S,D), where C is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S. 
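As a rough illustration of the tag-sequence selection just described (choosing the sequence of classes that maximizes the product of per-word class probabilities, with inadmissible class transitions excluded), here is a minimal dynamic-programming sketch. The class names, probability tables, and admissibility rule below are invented for illustration and are not the paper's implementation.

```python
from math import log

def best_sequence(word_probs, admissible):
    """Viterbi-style search: word_probs[i][c] is a classifier's probability
    of class c for word i; admissible(a, b) says whether class b may follow
    class a (a transition probability of 1 or 0)."""
    # best[c] = (log-probability of the best path ending in c, that path)
    best = {c: (log(word_probs[0][c]), [c]) for c in word_probs[0]}
    for probs in word_probs[1:]:
        new = {}
        for c in probs:
            cands = [(lp + log(probs[c]), path + [c])
                     for a, (lp, path) in best.items() if admissible(a, c)]
            if cands:
                new[c] = max(cands)
        best = new
    return max(best.values())[1]

# Hypothetical two-word example: a "begin" class must be followed by an
# "end" class, never by an unrelated "unique" class.
probs = [{'per_begin': 0.6, 'loc_unique': 0.4},
         {'per_end': 0.7, 'loc_unique': 0.3}]
ok = lambda a, b: not (a == 'per_begin' and b != 'per_end')
print(best_sequence(probs, ok))  # ['per_begin', 'per_end']
```

Working in log space avoids underflow on long sentences; the hard 0/1 transitions simply drop inadmissible candidate paths.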
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) * prod_j alpha_j^f_j(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h,o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h,o) = 1 if o = true and the previous word is "the", and f_j(h,o) = 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, both MENE and IdentiFinder used more training data than we did.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability P(c_i | c_{i-1}) between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training (Table 1: Features based on the token string).', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) of w-1 (or of w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature, firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w-1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w-1) is set to 1.', 'Same for w+1. 
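A minimal sketch of the Case and Zone feature group described above, emitting (case, zone) indicator features for a token; the function and feature names here are illustrative, not the paper's code.

```python
def case_zone_features(token, zone):
    """Emit (case, zone) indicator features for one token: initCaps,
    allCaps, and mixedCaps, each paired with the document zone."""
    feats = set()
    if token[:1].isupper():
        feats.add(('initCaps', zone))
    if token.isupper():
        feats.add(('allCaps', zone))  # an allCaps token is also initCaps
    if token[:1].islower() and not token.islower():
        feats.add(('mixedCaps', zone))  # starts lower case, mixed case overall
    return feats

assert case_zone_features('IBM', 'TXT') == {('initCaps', 'TXT'), ('allCaps', 'TXT')}
assert case_zone_features('eBay', 'HL') == {('mixedCaps', 'HL')}
```

Each feature is a simple binary indicator, which matches the maximum entropy framework's binary feature functions.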
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w-1 is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
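The "frequency" computation for candidate corporate suffixes (counting the number of distinct tokens that precede each final token of an organization name) can be sketched as follows; the function name and data are illustrative.

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """Sketch of building Corporate-Suffix-List candidates: the 'frequency'
    of a candidate suffix is the number of DISTINCT tokens that precede it
    as the last word of an organization name in the training data."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceding.items()}

# The example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times give Corp. a frequency of 2.
names = ['Electric Corp.'] * 3 + ['Manufacturing Corp.'] * 5
print(suffix_frequencies(names))  # {'Corp.': 2}
```

Counting distinct predecessors rather than raw occurrences keeps a suffix that appears many times after a single company name from looking more productive than it is.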
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the word preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check w-1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1) CEO of McCann . . . (2) The McCann family . . . (3) (Table 2: Sources of Dictionaries - Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names / Person Last Names: http://www.census.gov/genealogy/names.)', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (footnote 2). Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%. Table 4: Training Data, by no. of articles and no. of tokens (MUC6 / MUC7): MENERGI 318 articles, 160,000 tokens / 200 articles, 180,000 tokens; IdentiFinder - , 650,000 tokens / - , 790,000 tokens; MENE - , - / 350 articles, 321,000 tokens. (Table 5: Comparison of results for MUC6.)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder (footnote 3).', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
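The acronym matching used by the ACRO feature group earlier (e.g., checking FCC against Federal Communications Commission) can be sketched as follows; the function name is illustrative, not the paper's code.

```python
def matches_acronym(acronym, tokens):
    """Check whether a sequence of initial-capitalized tokens spells out
    the given all-caps acronym, one initial letter per token."""
    return (len(tokens) == len(acronym)
            and all(t[:1].isupper() for t in tokens)
            and ''.join(t[0] for t in tokens) == acronym)

assert matches_acronym('FCC', ['Federal', 'Communications', 'Commission'])
assert not matches_acronym('FCC', ['Federal', 'Commission'])
```

A matching sequence would then receive the A begin / A continue / A end features, and the acronym itself A unique.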
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens. Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive
P11-1061_swastika,P11-1061,4,4,"Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) * prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is "the", and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_i P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}.', 'Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i-1} (or for w_{i+1}) is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if such a neighboring token is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. 
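The "frequency" described above counts distinct preceding tokens rather than raw occurrences, as in the Corp. example. A minimal sketch of that computation (the organization names are made-up stand-ins for names collected from training data):

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    # For each final token of an organization name, record the set of
    # DISTINCT tokens that immediately precede it; the "frequency" of a
    # candidate suffix is the size of that set.
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# "Electric Corp." seen 3 times, "Manufacturing Corp." seen 5 times:
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freq = suffix_frequencies(orgs)
# Corp. occurs 8 times but has only 2 distinct preceding tokens,
# so its "frequency" is 2.
```

Counting distinct contexts rather than raw counts keeps a suffix that appears many times with a single name (and is therefore part of that name, not a generic suffix) from dominating the list.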
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens through the end of that sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (footnote 2).', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (No. of Articles / No. of Tokens, for MUC6 then MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder (footnote 3).', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
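The 27% and 14% figures follow from the Table 3 F-measures: treating 100 minus the F-measure as the error, the reduction is taken relative to the baseline error. A quick arithmetic check:

```python
def error_reduction(baseline_f, improved_f):
    # Relative reduction in error, where error = 100 - F-measure.
    return (improved_f - baseline_f) / (100.0 - baseline_f)

muc6 = error_reduction(90.75, 93.27)  # baseline -> all global features
muc7 = error_reduction(85.22, 87.24)
# muc6 is about 0.27 (27%) and muc7 about 0.14 (14%), matching the text.
```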
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu.', 'Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.']
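As a concrete illustration of the document-level evidence discussed above, the ACRO feature group (Section 4.2) can be sketched roughly as follows. This is an independent reimplementation of the described behavior, not the MENERGI code; the tag names mirror those used in the text.

```python
def acro_features(tokens):
    # Collect all-caps words as acronyms, then mark initial-capitalized
    # sequences whose initials spell a collected acronym.
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats = [set() for _ in tokens]
    for i, t in enumerate(tokens):
        if t in acronyms:
            feats[i].add("A_unique")
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            # initCaps but not allCaps words whose initials match the acronym
            if all(w[:1].isupper() and not w.isupper() for w in window) and \
               "".join(w[0] for w in window) == acro:
                feats[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].add("A_continue")
                feats[i + n - 1].add("A_end")
    return feats

doc = "The FCC ruled . Federal Communications Commission officials agreed .".split()
f = acro_features(doc)
```

On this toy document, FCC gets A_unique, and Federal / Communications / Commission get A_begin / A_continue / A_end, as in the paper's example.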
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique. Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, and express some relationship between features and outcome. The probability distribution that satisfies this property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

    P(o | h) = (1 / Z(h)) * exp( sum_j lambda_j * f_j(h, o) )

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting whether a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

    f_j(h, o) = 1 if o = true and previous word = the
                0 otherwise

The parameters lambda_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we compare the results of MENE, IdentiFinder, and MENERGI. However, both MENE and IdentiFinder used more training data than we did.

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique). To eliminate such sequences, we define a transition probability P(c_i | c_{i-1}) between word classes that is equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows:

    P(c_1, ..., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) * P(c_i | c_{i-1})

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are based on neighboring tokens, as well as on the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses the tokens from w-2 to w+2 (from two tokens before to two tokens after w), we use only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint: multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. It imposes constraints that are based on the probability of each name class during training.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 x total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string of w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

Table 1: Features based on the token string

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The strings of the previous token w-1 and the next token w+1 are used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. The same is done for w-1.
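The case-and-zone and token-string feature groups above can be sketched in Python. This is a minimal illustration only: the paper's actual implementation uses the Java opennlp package, and the function name and feature encodings here are ours.

```python
def local_features(token, zone, first_word):
    """Sketch of a few local feature groups: Zone, Case and Zone,
    InitCapPeriod (from Token Information), and First Word.

    Returns the set of binary features that fire for `token` in `zone`.
    Feature names are illustrative, not the paper's actual identifiers.
    """
    feats = {("zone", zone)}            # Zone feature group
    if token[:1].isupper():             # initCaps
        feats.add(("initCaps", zone))
    if token.isupper():                 # allCaps (an allCaps token is also initCaps)
        feats.add(("allCaps", zone))
    if token[:1].islower() and any(c.isupper() for c in token[1:]):
        feats.add(("mixedCaps", zone))  # e.g. "eBay"
    if token[:1].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")      # e.g. "Mr."
    if first_word:
        feats.add("firstword")          # first word of a sentence
    return feats
```

Note how "IBM" fires both (allCaps, zone) and (initCaps, zone), matching the remark above that an allCaps token is also initCaps.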
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases the use of hyphens can be considered optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps that are not found in commonWords are tested against each list in Table 2; if a token is found in a list, then a feature for that list is set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list cslist of tokens that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
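The "frequency" rule above (counting distinct preceding tokens rather than raw occurrences) can be sketched as follows. The function name and input format are illustrative, not from the paper.

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each final token of an organization name, count the number of
    DISTINCT immediately preceding tokens -- the "frequency" defined above."""
    preceding = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            # record the token seen just before this final token
            preceding[toks[-1]].add(toks[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}
```

On the example above, eight occurrences of names ending in Corp. with two distinct preceding tokens (Electric, Manufacturing) yield a frequency of 2 for Corp.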
For a token w that is in a consecutive sequence of initCaps tokens, if any token of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from w-1 to the last token of the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check w-1, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

    McCann initiated a new global system. (1)
    CEO of McCann ... (2)
    The McCann family ... (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless some other information is provided.

Table 2: Sources of Dictionaries
    Location Names:     http://www.timeanddate.com
                        http://www.cityguide.travel-guides.com
                        http://www.worldtravelguide.net
    Corporate Names:    http://www.fmlx.com
    Person First Names: http://www.census.gov/genealogy/names
    Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in a sentence that starts with "Bush put a freeze on ...", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on ..."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in the local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone are stored as acronyms (e.g., IBM). The system then looks for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. for an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature I_begin set to 1, Broadcasting has an additional feature I_continue set to 1, and Corp. has an additional feature I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements in both MUC6 and MUC7 test accuracy.

Table 3: F-measure after successive addition of each global feature group
               MUC6      MUC7
    Baseline   90.75%    85.22%
    + ICOC     91.50%    86.24%
    + CSPP     92.89%    86.96%
    + ACRO     93.04%    86.99%
    + SOIC     93.25%    87.22%
    + UNIQ     93.27%    87.24%

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both the MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.

Table 4: Training Data
                      MUC6                     MUC7
                  Articles   Tokens        Articles   Tokens
    MENERGI       318        160,000       200        180,000
    IdentiFinder  -          650,000       -          790,000
    MENE          -          -             350        321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
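The error reductions quoted above follow from the Table 3 F-measures, taking 100 - F as the error rate. A quick check (the helper function is ours, for illustration only):

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error, with error taken as 100 - F-measure."""
    base_err, final_err = 100.0 - baseline_f, 100.0 - final_f
    return 100.0 * (base_err - final_err) / base_err

# MUC6: 90.75% -> 93.27% F-measure, i.e. error 9.25 -> 6.73 (about 27% reduction)
muc6 = error_reduction(90.75, 93.27)
# MUC7: 85.22% -> 87.24% F-measure, i.e. error 14.78 -> 12.76 (about 14% reduction)
muc7 = error_reduction(85.22, 87.24)
```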
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For a fair comparison, we have tabulated all results together with the size of the training data used (Table 5 and Table 6). Besides the size of the training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they added list membership features, which helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

(MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens.)

Table 6: Comparison of results for MUC7

In MUC6, the best result was achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with more training data. We have estimated the performance of IdentiFinder '99 at 200K words of training data from these graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data alone is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to information from the same document.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency, or complex formulations of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
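As a closing illustration of the abbreviation tendency just mentioned, the ACRO matching of Section 4.2 can be sketched as follows. This is a simplified sketch with our own function names; it matches initials only and ignores complications such as lowercase function words inside names.

```python
def matches_acronym(acronym, words):
    """True if the sequence of initCaps `words` spells out `acronym`."""
    return (len(acronym) == len(words)
            and all(w[:1] == a for w, a in zip(words, acronym)))

def acro_features(acronym, words):
    """Assign A_begin / A_continue / A_end to the expansion tokens and
    A_unique to the acronym itself, as in the FCC example of Section 4.2."""
    if not matches_acronym(acronym, words):
        return {}
    feats = {acronym: "A_unique", words[0]: "A_begin", words[-1]: "A_end"}
    for w in words[1:-1]:
        feats[w] = "A_continue"
    return feats
```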
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder â\x80\x93 650,000 â\x80\x93 790,000 MENE â\x80\x93 â\x80\x93 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7,14%.', 'ICOC and CSPP contributed the greatest im provements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W11-2123_vardha,W11-2123,2,2,The PROBING data structure uses linear probing hash tables and is designed for speed.,"A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information. In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task. Statistical NERs usually find the sequence of tags t_1, ..., t_n that maximizes the probability P(t_1, ..., t_n | s), where s is the sequence of words in a sentence, and t_1, ..., t_n is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999). We propose maximizing P(t_1, ..., t_n | s, D), where t_1, ..., t_n is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s.
Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).

We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush", then "Bush"). As such, global information from the whole context of a document is important to more accurately recognize named entities. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.

Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).

Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 * 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.

In Section 5, we try to compare the results of MENE, IdentiFinder, and MENERGI. However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.

The system described in this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes * 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

    p(o | h) = (1 / Z(h)) * prod_j alpha_j^f_j(h, o)

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

    f_j(h, o) = 1 if o = true and the previous word = the; 0 otherwise.

The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is then defined as follows:

    P(c_1, ..., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) * P(c_i | c_{i-1})

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses the tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. It imposes constraints that are based on the probability of each name class during training.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 * total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string of w, as listed in Table 1 (Table 1: Features based on the token string). For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. The same is done for w-1.
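To illustrate how such binary feature groups can be computed, here is a minimal sketch in Python (the actual system uses the Java-based opennlp package; the helper names are illustrative, and only InitCapPeriod and firstword are feature names taken from the text above):

```python
import re

def case_zone_features(token: str, zone: str) -> dict:
    """Case and Zone feature group: (initCaps | allCaps | mixedCaps, zone)."""
    feats = {}
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1      # starts with a capital letter
    if token.isupper():
        feats[("allCaps", zone)] = 1       # an allCaps token is also initCaps
    if token[:1].islower() and any(c.isupper() for c in token):
        feats[("mixedCaps", zone)] = 1
    return feats

def token_string_features(token: str, is_first_word: bool) -> dict:
    """Two of the token-string features: InitCapPeriod (named in the text)
    and firstword; the other Table 1 features are not reproduced here."""
    feats = {}
    if re.fullmatch(r"[A-Z].*\.", token):  # e.g., "Mr.", "Corp."
        feats["InitCapPeriod"] = 1
    feats["firstword"] = 1 if is_first_word else 0
    return feats
```

Each returned key corresponds to one binary feature; in the maximum entropy model these would be indexed as feature functions f_j(h, o).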
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w+1 is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
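The "frequency" computation described above — counting the number of distinct preceding tokens for each candidate last token — can be sketched as follows (a simplified reconstruction; the input list of organization names and the cutoff of 2 are illustrative assumptions):

```python
from collections import defaultdict

def corporate_suffix_list(org_names, min_freq=2):
    """Collect last tokens of organization names; the 'frequency' of a last
    token is the number of DISTINCT preceding tokens it appears with."""
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceders[tokens[-1].lower()].add(tokens[-2].lower())
    return {suffix for suffix, prev in preceders.items() if len(prev) >= min_freq}

# The example from the text: Electric Corp. (seen 3 times) and Manufacturing
# Corp. (seen 5 times) give Corp. a "frequency" of 2 distinct preceding tokens.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
assert corporate_suffix_list(names) == {"corp."}
```

Note that repeated occurrences of the same name do not increase the count; only distinct preceding tokens do, which is the point of the definition.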
For a token w that is in a consecutive sequence of initCaps tokens, if a token in that sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens, from the word immediately preceding the sequence through the tokens of the sequence itself, is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

    McCann initiated a new global system. (1)
    CEO of McCann . . . (2)
    The McCann family . . . (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

Table 2: Sources of Dictionaries

    Description         Source
    Location Names      http://www.timeanddate.com
                        http://www.cityguide.travel-guides.com
                        http://www.worldtravelguide.net
    Corporate Names     http://www.fmlx.com
    Person First Names  http://www.census.gov/genealogy/names
    Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. has an additional feature of I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.²

Table 3: F-measure after successive addition of each global feature group

                MUC6      MUC7
    Baseline    90.75%    85.22%
    + ICOC      91.50%    86.24%
    + CSPP      92.89%    86.96%
    + ACRO      93.04%    86.99%
    + SOIC      93.25%    87.22%
    + UNIQ      93.27%    87.24%

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.³

Table 4: Training Data

    Systems       MUC6 Articles  MUC6 Tokens  MUC7 Articles  MUC7 Tokens
    MENERGI       318            160,000      200            180,000
    IdentiFinder  -              650,000      -              790,000
    MENE          -              -            350            321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
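Returning to the acronym (ACRO) feature group described in Section 4.2, the matching of an all-caps acronym against a sequence of initCaps words can be sketched as follows (a simplification that ignores document zones and assumes the acronym is exactly the sequence of first letters):

```python
def acro_features(acronym, sequence):
    """If the initials of an initCaps word sequence spell the acronym,
    assign A_begin / A_continue / A_end to the words and A_unique
    to the acronym itself."""
    initials = "".join(w[0] for w in sequence)
    feats = {}
    if initials == acronym and len(sequence) > 1:
        feats[sequence[0]] = "A_begin"
        for w in sequence[1:-1]:
            feats[w] = "A_continue"
        feats[sequence[-1]] = "A_end"
        feats[acronym] = "A_unique"
    return feats

feats = acro_features("FCC", ["Federal", "Communications", "Commission"])
assert feats == {"Federal": "A_begin", "Communications": "A_continue",
                 "Commission": "A_end", "FCC": "A_unique"}
```

This reproduces the FCC / Federal Communications Commission example from the text; the real system matches acronyms collected from the text zone against all initCaps sequences in the document.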
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to information from the same document only.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']
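The UNIQ group described above can be computed in one pass over the document: a word in initCaps that occurs exactly once fires a (Unique, Zone) feature, where Zone is the document zone in which it appears. A minimal sketch follows; it is an illustrative reconstruction, not the authors' code, and the exact initCaps test, the zone labels, and the tokens/zones data layout are assumptions made for the example.

```python
from collections import Counter

def unique_occurrence_features(tokens, zones):
    """Sketch of the UNIQ global feature group: a word in initCaps that
    occurs exactly once in the whole document fires a (Unique, Zone)
    feature for its position.  The initCaps test (leading capital
    followed by lowercase) and the zone labels are illustrative
    assumptions, not the paper's exact definitions."""
    counts = Counter(tokens)
    features = {}
    for i, tok in enumerate(tokens):
        # initCaps here: first letter capitalized, remainder lowercase
        if counts[tok] == 1 and tok[:1].isupper() and tok[1:].islower():
            features[i] = ("Unique", zones[i])
    return features
```

Because the count is taken over the whole document rather than one sentence, this is a global feature in the paper's sense: the same classifier sees it directly, with no secondary error-correcting pass.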
these linguistic factors increase the difficulty of syntactic disambiguation.', 'Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Kübler, 2005).', '

Footnote 1: The apparent difficulty of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajič and Zemánek, 2004; Habash and Roth, 2009).

', 'To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply "Arabic") because of the unusual opportunity it presents for comparison to English parsing results.', 'The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993).', 'Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages.', 'But Arabic contains a variety of linguistic phenomena unseen in English.', 'Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation.', 'For humans, this characteristic can impede the acquisition of literacy.', 'How do additional ambiguities caused by devocalization affect statistical learning?', 'How should the absence of vowels and syntactic markers influence annotation choices and grammar development?', 'Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering.', 'Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).', 'Next we show that the ATB is similar to other treebanks in gross statistical terms, but that annotation consistency remains low relative to English (§3).', 'We then use linguistic and annotation insights to develop a manually annotated grammar for Arabic (§4).', 'To facilitate comparison with previous work, we exhaustively evaluate 
this grammar and two other parsing models when gold segmentation is assumed (§5).', 'Finally, we provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing (§6).', 'We quantify error categories in both evaluation settings.', 'To our knowledge, ours is the first analysis of this kind for Arabic parsing.', 'Arabic is a morphologically rich language with a root-and-pattern system similar to other Semitic languages.', 'The basic word order is VSO, but SVO, VOS, and VO configurations are also possible.2 Nouns and verbs are created by selecting a consonantal root (usually triliteral or quadriliteral), which bears the semantic core, and adding affixes and diacritics.', 'Particles are uninflected.', '

    Word  Form     Gloss            Head of  Complement  POS
    1     'inna    "Indeed, truly"  VP       Noun        VBP
    2     'anna    "That"           SBAR     Noun        IN
    3     in       "If"             SBAR     Verb        IN
    4     an       "to"             SBAR     Verb        IN

Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form an.

', 'The distinctions in the ATB are linguistically justified, but complicate parsing.', 'Table 8a shows that the best model recovers SBAR at only 71.0% F1.', 'Diacritics can also be used to specify grammatical relations such as case and gender.', 'But diacritics are not present in unvocalized text, which is the standard form of, e.g., news media documents.3', 'Let us consider an example of ambiguity caused by devocalization.', 'Table 1 shows four words whose unvocalized surface forms (an) are indistinguishable.', 'Whereas Arabic linguistic theory assigns (1) and (2) to the class of pseudo verbs inna and her sisters since they can be inflected, the ATB conventions treat (2) as a complementizer, which means that it must be 
the head of SBAR.', 'Because these two words have identical complements, syntax rules are typically unhelpful for distinguishing between them.', 'This is especially true in the case of quotations (which are common in the ATB) where (1) will follow a verb like (2) (Figure 1).', 'Even with vocalization, there are linguistic categories that are difficult to identify without semantic clues.', 'Two common cases are the attributive adjective and the process nominal maSdar, which can have a verbal reading.4', 'Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness.', 'Moreover, they are used as substantives much more frequently than is done in English.', '

Footnote 2: Unlike machine translation, constituency parsing is not significantly affected by variable word order. However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009). In particular, the decision to represent arguments in verb-initial clauses as VP internal makes VSO and VOS configurations difficult to distinguish. Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).
Footnote 3: Techniques for automatic vocalization have been studied (Zitouni et al., 2006; Habash and Rambow, 2007). However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.
Footnote 4: Traditional Arabic linguistic theory treats both of these types as subcategories of noun.

Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form an (Table 1) [the (a) Reference and (b) Stanford tree diagrams are not recoverable from this extraction].

', 'Process nominals name the action of the transitive or ditransitive verb from which they derive.', 'The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.', 'When the maSdar lacks a determiner, the constituent as a whole resembles the ubiquitous annexation construct iDafa.', 'Gabbard and Kulick (2008) show that there is significant attachment ambiguity associated with iDafa, which occurs in 84.3% of the trees in our development set.', 'Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase.', 'All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly.', 'For parsing, the most challenging form of ambiguity occurs at the discourse level.', 'A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005).', 'Instead of offsetting new topics with punctuation, writers of MSA insert connectives such as wa and fa to link new elements to both preceding clauses and the text as a whole.', 'As a result, Arabic sentences are usually long relative to English, especially after segmentation (Table 2).', '

    Length   English (WSJ)   Arabic (ATB)
    <= 20    41.9%           33.7%
    <= 40    92.4%           73.2%
    <= 63    99.7%           92.6%
    <= 70    99.9%           94.9%

Table 2: Frequency distribution for sentence lengths in the WSJ (sections 2-23) and the ATB (p1-3).

', 'English parsing evaluations usually report results on sentences up to length 40.', 'Arabic sentences of up to length 63 would need to be evaluated to account for the same fraction of the data.', 'We propose a limit of 70 words for Arabic parsing evaluations.', '

                  ATB      CTB6     Negra    WSJ
    Trees         23449    28278    20602    43948
    Word Types    40972    45245    51272    46348
    Tokens        738654   782541   355096   1046829
    Tags          32       34       499      45
    Phrasal Cats  22       26       325      27
    Test OOV      16.8%    22.2%    30.5%    13.2%

Table 4: Gross statistics for several different treebanks [per-sentence statistics in the original table are not recoverable from this extraction]. Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 2-21 (train) and section 23 (test).

Table 3: Dev set frequencies for the two most significant discourse markers in Arabic; counts are skewed toward analysis as a conjunction [table body not recoverable from this extraction].

', 'The ATB gives several different analyses to these words to indicate different types of coordination.', 'But it conflates the coordinating and discourse separator functions of wa into one analysis: conjunction (Table 3).', 'A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990).', 'We show that noun-noun vs. 
discourse-level coordination ambiguity in Arabic is a significant source of parsing errors (Table 8c).', '3.1 Gross Statistics.', 'Linguistic intuitions like those in the previous section inform language-specific annotation choices.', 'The resulting structural differences between treebanks can account for relative differences in parsing performance.', 'We compared the ATB5 to treebanks for Chinese (CTB6), German (Negra), and English (WSJ) (Table 4).', 'The ATB is disadvantaged by having fewer trees with longer average yields.6 But to its great advantage, it has a high ratio of non-terminals/terminals (μ Constituents / μ Length).', '

Footnote 5: LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1). We map the ATB morphological analyses to the shortened "Bies" tags for all experiments.

', 'Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003).', 'Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic.', 'In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance.', '3.2 Inter-annotator Agreement.', 'Annotation consistency is important in any supervised learning task.', 'In the initial release of the ATB, inter-annotator agreement was inferior to other LDC treebanks (Maamouri et al., 2008).', 'To improve agreement during the revision process, a dual-blind evaluation was performed in which 10% of the data was annotated by independent teams.', 'Maamouri et al. (2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB.', 'But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference, or similarity, between treebanks.', 'Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB.', 'For a corpus C, let M 
be the set of tuples ⟨n, l⟩, where n is an n-gram with bracketing label l. If any n appears in a corpus position without a bracketing label, then we also add ⟨n, NIL⟩ to M. We call the set of unique n-grams with multiple labels in M the variation nuclei of C.', 'Bracketing variation can result from either annotation errors or linguistic ambiguity.', 'Human evaluation is one way to distinguish between the two cases.', 'Following Dickinson (2005), we randomly sampled 100 variation nuclei from each corpus and evaluated each sample for the presence of an annotation error.', 'The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ.7', 'Table 5 shows type- and token-level error rates for each corpus.', '

Table 5: Evaluation of 100 randomly sampled variation nuclei types [the numeric table body is not recoverable from this extraction]. The samples from each corpus were independently evaluated. The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate.

', 'The 95% confidence intervals for type-level errors are (5580, 9440) for the ATB and (1400, 4610) for the WSJ.', 'The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference.', 'On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample.', 'At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts.', 'For example, one of the ATB samples was the determiner dhalik, "that."', 'The sample occurred in 1507 corpus positions, and we found that the annotations were consistent.', 'If we remove this sample from the evaluation, then the ATB type-level error rises to only 37.4% while the n-gram error rate increases to 6.24%.', 'The number of ATB n-grams also falls below the WSJ sample size, as the largest WSJ sample appeared in only 162 corpus positions.', '

Footnote 6: Generative parsing performance is known to deteriorate with sentence length. As a result, Habash et al. (2006) developed a technique for splitting and chunking long sentences. In application settings, this may be a profitable strategy.
Footnote 7: Unlike Dickinson (2005), we strip traces and only consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs). Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei.

Figure 2: An ATB sample from the human evaluation [the (a) and (b) tree diagrams for the phrase "Sharm Al-Sheikh summit" are not recoverable from this extraction]. The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a). But the city name Sharm Al-Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b).

', 'We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003).', 'Manual annotation results in human-interpretable grammars that can inform future treebank annotation decisions.', 'A simple lexicalized PCFG with second order Markovization gives relatively poor performance: 75.95% F1 on the test set.8 But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).', '

Footnote 8: We use head-finding rules specified by a native speaker of Arabic. This PCFG is incorporated into the Stanford Parser, a factored model that chooses a 1-best parse from the product of constituency and dependency parses.

', 'In our grammar, features are realized as annotations to basic category labels.', 'We start with noun features since written Arabic contains a very high proportion of NPs.', 'genitiveMark indicates recursive NPs with an indefinite nominal left daughter and an NP right daughter.', 'This is the form of recursive levels in iDafa constructs.', 'We also add an annotation for one-level iDafa (oneLevelIdafa) constructs since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008).', 'For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead).', 'Base NPs are the other significant category of nominal phrases.', 'markBaseNP indicates these non-recursive nominal phrases.', 'This feature includes named entities, which the ATB marks with a flat NP node dominating an arbitrary number of NNP pre-terminal daughters (Figure 2).', 'For verbs we add two features.', 'First we mark any node that dominates (at any level) a verb phrase (markContainsVerb).', 'This feature has a linguistic justification.', 'Historically, Arabic grammar has identified two sentence types: those that begin with a nominal, and those that begin with a verb.', 'But foreign learners are often surprised by the verbless predications that are frequently used in Arabic.', 'Although these are technically nominal, they have become known as "equational" sentences.', 'markContainsVerb is especially effective for distinguishing root S nodes of equational sentences.', 'We also mark all nodes that dominate an SVO configuration (containsSVO).', 'In MSA, SVO usually appears in non-matrix clauses.', 'Lexicalizing several POS tags improves performance.', 'splitIN captures the verb/preposition idioms that are widespread in Arabic.', 'Although this feature helps, we encounter one consequence of variable word order.', 'Unlike the WSJ corpus, which has a high frequency of rules like VP → VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects).', 'For example, we might have VP → VB NP PP, where the NP is the subject.', 'This annotation choice weakens splitIN.', 'The ATB gives all punctuation a single tag.', 'For parsing, this is a mistake, especially in the case of interrogatives.', 'splitPUNC restores the convention of the WSJ.', 'We also mark all tags that dominate a word with the feminine ending taa marbuuTa (markFeminine).', 'To differentiate between the coordinating and discourse separator functions of conjunctions (Table 3), we mark each CC with the label of its right sister (splitCC).', 'The intuition here is that the role of a discourse marker can usually be determined by the category of the word that follows it.', 'Because conjunctions are elevated in the parse trees when they separate recursive constituents, we choose the right sister instead of the category of the next word.', 'We create equivalence classes for verb, noun, and adjective POS categories.', '

Table 6: Incremental dev set results for the manually annotated grammar (sentences of length <= 70) [table body not recoverable from this extraction].

', 'We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers.', 'All experiments use ATB parts 1-3 divided according to the canonical split suggested by Chiang et al. (2006).', 'Preprocessing the raw trees improves parsing performance considerably.9 We first discard all trees dominated by X, which indicates errors and non-linguistic text.', 'At the phrasal level, we remove all function tags and traces.', 'We also collapse unary chains with identical basic categories like NP → NP.', 'The pre-terminal morphological analyses are mapped to the shortened "Bies" tags provided with the treebank.', 'Finally, we add "DT" to the tags for definite nouns and adjectives (Kulick et al., 2006).', 'The orthographic normalization strategy we use is simple.10 In addition to removing all diacritics, we strip instances of taTweel, collapse variants of alif to bare alif,11 and map Arabic punctuation characters to their Latin equivalents.', 'We retain segmentation markers, which are consistent only in the vocalized section of the treebank, to differentiate between, e.g., "they" and "their."', 'Because we use the vocalized section, we must remove null pronoun markers.', '

Footnote 9: Both the corpus split and pre-processing code are available at http://nlp.stanford.edu/projects/arabic.shtml.
Footnote 10: Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation.
Footnote 11: taTweel is an elongation character used in Arabic script to justify text; it has no syntactic function. Variants of alif are inconsistently used in Arabic texts. For alif with hamza, normalization can be seen as another level of devocalization.

', 'In Table 7 we give results for several evaluation metrics.', 'Evalb is a Java re-implementation of the standard labeled precision/recall metric.12', '

Footnote 12: For English, our Evalb implementation is identical to the most recent reference (EVALB20080701). For Arabic we add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB. Tokens tagged as PUNC are not discarded unless they consist entirely of punctuation.

Table 7: Test set results [the per-model Leaf Ancestor, Evalb LP/LR/F1, exact match, and tagging scores for the Stanford (v1.6.3), Bikel (v1.2), and Berkeley (Sep. 09) parsers, in Baseline, Self-tag, Pre-tag, and Gold POS configurations at lengths <= 70 and all lengths, are garbled beyond recovery in this extraction]. Maamouri et al. (2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length <= 40. The Bikel GoldPOS configuration only supplies the gold POS tags; it does not force the parser to use them. We are unaware of prior results for the Stanford parser.

', 'The Leaf Ancestor metric measures the cost of transforming guess trees to the reference (Sampson and Babarczy, 2003).', 'It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB.', 'For each terminal, the Leaf Ancestor metric extracts the shortest path to the root.', 'It then computes a normalized Levenshtein edit distance between the extracted chain and the reference.', 'The range of the score is between 0 and 1 (higher is better).', 'We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores along with the number of exactly matching guess trees.', '5.1 Parsing Models.', 'The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features:

    1. Presence of the determiner Al
    2. Contains digits
    3. Ends with the feminine affix taa marbuuTa
    4. Various verbal and adjectival suffixes

', 'Other notable parameters are second order vertical Markovization and marking of unary rules.', 'Modifying the Berkeley parser for Arabic is straightforward.', 'After adding a ROOT node to all trees, we train a grammar using six split-and-merge cycles and no Markovization.', 'We use the default inference parameters.', 'Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings.', 'However, when we pre-tag the input, as is recommended for English, we notice a 0.57% F1 improvement.', 'We use the log-linear tagger of Toutanova et al. (2003), which gives 96.8% accuracy on the test set.', '5.2 Discussion.', 'The Berkeley parser gives state-of-the-art performance for all metrics.', 'Our baseline for all sentence lengths is 5.23% F1 higher than the best previous result.', 'The difference is due to more careful pre-processing.', 'However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009).', '

Figure 3: Dev set learning curves for sentence lengths <= 70 [plot of F1 against number of training trees for the Berkeley, Stanford, and Bikel parsers; not recoverable from this extraction]. All three curves remain steep at the maximum training set size of 18818 trees.

', 'Moreover, the Stanford parser achieves the most exact Leaf Ancestor matches and tagging accuracy that is only 0.1% below the Bikel model, which uses pre-tagged input.', 'In Figure 4 we show an example of variation between the parsing models.', 'We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8.', 'The errors shown are from the Berkeley parser output, but they are representative of the other two parsing models.', '

Figure 4: The constituent Restoring of its constructive and effective role parsed by the three different models (gold segmentation) [the (a) Reference, (b) Stanford, (c) Berkeley, and (d) Bikel tree diagrams are not recoverable from this extraction].

', 'The ATB annotation distinguishes between verbal and nominal readings of maSdar process nominals.', 'Like verbs, maSdar takes arguments and assigns case to its objects, whereas it also demonstrates nominal characteristics by, e.g., taking determiners and heading iDafa (Fassi Fehri, 1993).', "In the ATB, asta'adah is tagged 48 times as a noun and 9 times as a verbal noun.", 'Consequently, all three parsers prefer the nominal reading.', 'Table 8b shows that verbal nouns are the hardest pre-terminal categories to identify.', 'None of the models attach the attributive adjectives correctly.', '6 Joint Segmentation and Parsing.', 'Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words.', 'Since these are distinct syntactic units, they are typically segmented.', 'The ATB segmentation scheme is one of many alternatives.', 'Until now, all evaluations of Arabic parsing, including the experiments in the previous section, have assumed gold segmentation.', 'But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline.', 'Segmentation errors cascade into the parsing phase, placing an artificial limit on parsing performance.', 'Lattice parsing (Chappelier et al., 1999) 
is an alternative to a pipeline that prevents cascading errors by placing all segmentation options into the parse chart.', 'Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic.', 'We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton.', 'To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a).', 'Formally, for a lexicon L and segments I ∈ L, O ∉ L, each word automaton accepts the language I*(O + I)I*.', 'Aside from adding a simple rule to correct alif deletion caused by the preposition li-, no other language-specific processing is performed.', 'Our evaluation includes both weighted and unweighted lattices.', 'We weight edges using a unigram language model estimated with Good-Turing smoothing.', 'Despite their simplicity, unigram weights have been shown as an effective feature in segmentation models (Dyer, 2009).13', 'The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005).', 'MADA uses an ensemble of SVMs to first re-rank the output of a deterministic morphological analyzer.', 'For each input token, the segmentation is then performed deterministically given the 1-best analysis.', '

Footnote 13: Of course, this weighting makes the PCFG an improper distribution. However, in practice, unknown word models also make the distribution improper.

    Parent  Head  Modifier  Dir  # gold  F1        Label  # gold  F1
    NP      NP    TAG       R    946     0.54      ADJP   1216    59.45
    S       S     S         R    708     0.57      SBAR   2918    69.81
    NP      NP    ADJP      R    803     0.64      FRAG   254     72.87
    NP      NP    NP        R    2907    0.66      VP     5507    78.83
    NP      NP    SBAR      R    1035    0.67      S      6579    78.91
    NP      NP    PP        R    2713    0.67      PP     7516    80.93
    VP      TAG   PP        R    3230    0.80      NP     34025   84.95
    NP      NP    TAG       L    805     0.85      ADVP   1093    90.64
    VP      TAG   SBAR      R    772     0.86      WHNP   787     96.00
    S       VP    NP        L    961     0.87

Table 8: Per category performance of the Berkeley parser on sentence lengths <= 70 (dev set, gold segmentation): (a) major phrasal categories (right columns), (b) major POS categories [not recoverable from this extraction], and (c) the ten lowest scoring (Collins, 2003)-style dependencies occurring more than 700 times (left columns).

', '(a) Of the high frequency phrasal categories, ADJP and SBAR are the hardest to parse.', 'We showed in §2 that lexical ambiguity explains the underperformance of these categories.', '(b) POS tagging accuracy is lowest for maSdar verbal nouns (VBG, VN) and adjectives (e.g., JJ).', 'Richer tag sets have been suggested for modeling morphologically complex distinctions (Diab, 2007), but we find that linguistically rich tag sets do not help parsing.', '(c) Coordination ambiguity is shown in dependency scores by, e.g., ⟨S S S R⟩ and ⟨NP NP NP R⟩.', '⟨NP NP PP R⟩ and ⟨NP NP ADJP R⟩ are both iDafa attachment.', 'Since guess and gold trees may now have different yields, the question of evaluation is complex.', 'Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric.', 'But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal.', 'Therefore, we only score guess/gold pairs with identical character yields, a condition that allows us to measure parsing, tagging, and segmentation accuracy by ignoring whitespace.', 'Table 9 shows that MADA produces a high quality segmentation, and that the effect of cascading segmentation errors on parsing is only 1.92% F1.', 'However, MADA is language-specific and relies on manually constructed dictionaries.', 'Conversely, the lattice parser requires no linguistic resources and produces segmentations of comparable quality.', 'Nonetheless, parse quality 
is much lower in the joint model because a lattice is effectively a long sentence.', 'A cell in the bottom row of the parse chart is required for each potential whitespace boundary.', 'As we have said, parse quality decreases with sentence length.', 'Finally, we note that simple weighting gives nearly a 2% F1 improvement, whereas Goldberg and Tsarfaty (2008) found that unweighted lattices were more effective for Hebrew.', 'Table 9: Dev set results for sentences of length ≤ 70.', 'Coverage indicates the fraction of hypotheses in which the character yield exactly matched the reference.', 'Each model was able to produce hypotheses for all input sentences.', 'In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.', 'By establishing significantly higher parsing baselines, we have shown that Arabic parsing performance is not as poor as previously thought, but remains much lower than English.', 'We have described grammar state splits that significantly improve parsing performance, catalogued parsing errors, and quantified the effect of segmentation errors.', 'With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus.', 'Our results suggest that current parsing models would benefit from better annotation consistency and enriched annotation in certain syntactic configurations.', 'Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work.', 'We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions.', 'The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship.', 'This paper is based on work supported in part by DARPA through IBM.', 'The content does not necessarily reflect the views of the U.S.
Government, and no official endorsement should be inferred.']",extractive W99-0613_vardha,W99-0613,4,27,The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).,"Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in .
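The two decoding objectives contrasted in this passage can be written out explicitly. The symbols below (w for words, t for tags, s for the sentence, D for the whole document) are reconstructed standard notation, since the extracted sentences elide them:

```latex
% Sentence-level decoding used by most statistical NERs:
\hat{t}_1 \ldots \hat{t}_n \;=\; \arg\max_{t_1 \ldots t_n} \; P(t_1 \ldots t_n \mid w_1 \ldots w_n)
% Document-level decoding proposed here, conditioning on the whole
% document D containing the sentence s:
\hat{t}_1 \ldots \hat{t}_n \;=\; \arg\max_{t_1 \ldots t_n} \; P(t_1 \ldots t_n \mid s, D)
```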
Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded
systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves 
performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = "the"; 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(Footnote 1: http://maxent.sourceforge.net) 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability P(c_i | c_{i−1}) between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, …, c_n assigned to the words in a sentence s in a document D is defined as follows: P(c_1, …, c_n | s, D) = ∏_{i=1}^{n} P(c_i | s, D) · P(c_i | c_{i−1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and .
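The transition-constrained decoding just described can be sketched in code. This is an illustrative reconstruction, not the authors' implementation: `log_prob(i, c)` stands in for the maximum entropy classifier's log-probability of class `c` for word `i`, and `admissible(p, c)` encodes the 0/1 transition probabilities between word classes.

```python
import math

def best_sequence(words, classes, log_prob, admissible):
    """Viterbi-style dynamic programming over word classes: class pairs
    with transition probability 0 are pruned, and the admissible sequence
    with the highest total log-probability is returned."""
    score = {c: log_prob(0, c) for c in classes}
    back = []
    for i in range(1, len(words)):
        new_score, ptr = {}, {}
        for c in classes:
            cands = [(score[p], p) for p in classes if admissible(p, c)]
            if not cands:                      # class unreachable at this position
                new_score[c], ptr[c] = -math.inf, None
                continue
            s, p = max(cands)
            new_score[c], ptr[c] = s + log_prob(i, c), p
        score, back = new_score, back + [ptr]
    # Trace back the best admissible sequence.
    last = max(classes, key=lambda c: score[c])
    seq = [last]
    for ptr in reversed(back):
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))
```

With the 29 classes described above, `admissible` would, for instance, reject person begin followed by location unique.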
Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone)
) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
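The token-string feature groups described above can be sketched as binary predicates on the token. This is a minimal illustration: the names mirror the ones mentioned in the text (initCaps, allCaps, mixedCaps, InitCapPeriod), but the full 10-feature set of Table 1 is not recoverable from the extracted text, so the remaining names are hypothetical.

```python
import re

def token_string_features(tok):
    """Binary features computed from the token string alone.  A token
    that is allCaps is also initCaps, matching the convention above."""
    feats = set()
    if tok[:1].isupper():
        feats.add('initCaps')
        if tok.endswith('.'):
            feats.add('InitCapPeriod')   # e.g. "Mr."
    if tok.isupper():
        feats.add('allCaps')             # e.g. "IBM"
    if tok[:1].islower() and any(c.isupper() for c in tok):
        feats.add('mixedCaps')           # e.g. "eBay"
    if any(c.isdigit() for c in tok):
        feats.add('containsDigit')       # hypothetical name, e.g. "3Com"
    if '-' in tok:
        feats.add('containsHyphen')      # hypothetical name, e.g. "third-quarter"
    if re.fullmatch(r'[A-Z]\.', tok):
        feats.add('oneCapPeriod')        # hypothetical name, e.g. "A."
    return feats
```

Zero, one, or more of these features fire for a given token; none is exclusive of the others.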
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . . , December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . . , Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
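The "distinct previous tokens" frequency used to build Corporate-Suffix-List can be sketched as follows. This is an illustrative reconstruction; the threshold `min_distinct` is a hypothetical parameter, since the paper's actual cutoff is not given in the text above.

```python
from collections import defaultdict

def corporate_suffix_list(org_names, min_distinct=2):
    """Build a corporate-suffix list from organization names seen in
    training.  The 'frequency' of a candidate suffix is the number of
    DISTINCT tokens observed immediately before it, per the definition
    above (so 'Electric Corp.' seen 3 times and 'Manufacturing Corp.'
    seen 5 times give Corp. a frequency of 2, not 8)."""
    preceding = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            preceding[toks[-1]].add(toks[-2])
    return {suffix for suffix, prevs in preceding.items()
            if len(prevs) >= min_distinct}
```

A Person-Prefix-List can be compiled analogously from the tokens preceding person names.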
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1) CEO of McCann . . . (2) The McCann family . . . (3)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu)', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (No. of Articles / No. of Tokens, MUC6 then MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder. (Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive N04-1038,N04-1038,5,71,"Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.",Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing .
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (e.g., first "President George Bush", then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 × 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) × prod_j alpha_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the; 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as
follows: P(c_1, ..., c_n | s, D) = prod_{i=1..n} P(c_i | s, D) × P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a corresponding feature (initCaps, zone) for w_{i-1} (or for w_{i+1}) is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
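To make the case-and-zone and token-shape feature groups above concrete, here is a minimal sketch in Python; the function names and the exact feature strings are illustrative, not the paper's implementation:

```python
def case_class(token):
    """Map a token to one of the case classes described above.
    Note: allCaps tokens are also initCaps, so allCaps is checked first."""
    if token.isupper():
        return "allCaps"      # e.g. "IBM"
    if token[:1].isupper():
        return "initCaps"     # e.g. "Bush"
    if any(c.isupper() for c in token):
        return "mixedCaps"    # e.g. "eBay"
    return None

def local_features(tokens, i, zone="TXT", first_word=False):
    """Binary local features for token i, returned as the set of
    feature names that would be set to 1."""
    w = tokens[i]
    feats = {"zone-" + zone}                          # zone feature
    case = case_class(w)
    if case is not None:
        feats.add("(%s, zone-%s)" % (case, zone))     # case-and-zone feature
    if w[:1].isupper() and w.endswith("."):
        feats.add("InitCapPeriod")                    # token shape, e.g. "Mr."
    if first_word:
        feats.add("firstword")                        # first word of sentence
    feats.add("lexicon=" + w.lower())                 # lexicon feature
    return feats
```

For instance, `local_features(["Mr.", "Smith"], 0, first_word=True)` sets zone-TXT, (initCaps, zone-TXT), InitCapPeriod, firstword, and the lexicon feature for "mr.".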
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.', '
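The "frequency" computation described above (counting the number of distinct preceding tokens for each candidate suffix) can be sketched as follows; this is an illustrative reconstruction, not the paper's code:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For the last token of each organization name, count the number of
    distinct preceding tokens it occurs with (the 'frequency' above)."""
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            # record which token immediately precedes this last token
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}
```

With "Electric Corp." seen 3 times and "Manufacturing Corp." seen 5 times, the frequency of "Corp." comes out as 2, matching the example in the text.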
For a token that is in a consecutive sequence of initCaps tokens, if one of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'The McCann family . . . (3)', '(Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.)', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.', '(Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.)', '(Table 5: Comparison of results for MUC6.)', '(Table 4: Training Data. MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 tokens (MUC6) and 790,000 tokens (MUC7); MENE: 350 articles / 321,000 tokens (MUC7).)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
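The error-reduction figures quoted with Table 3 follow from treating (100 − F) as the error rate; a quick check (an illustrative script, not from the paper):

```python
def error_reduction(baseline_f, improved_f):
    """Relative reduction in error when F-measure rises from
    baseline_f to improved_f, with error taken as (100 - F)."""
    return (improved_f - baseline_f) / (100.0 - baseline_f)

muc6 = error_reduction(90.75, 93.27)  # global features added, MUC6
muc7 = error_reduction(85.22, 87.24)  # global features added, MUC7
```

These come out at roughly 27% and 14%, matching the reductions reported in the text.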
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnotes: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Training data for IdentiFinder is actually given in words, i.e., 650K & 790K words, rather than tokens.)', '(Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned.']",extractive
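The acronym (ACRO) feature group described earlier (matching FCC against Federal Communications Commission, for instance) can be sketched as below; the function and feature names are illustrative, not the paper's implementation:

```python
def matches_acronym(acronym, words):
    """True if the initial letters of a sequence of initCaps words
    spell out the given all-caps acronym (e.g. FCC <-> Federal
    Communications Commission)."""
    if not acronym.isupper() or len(words) != len(acronym):
        return False
    return all(w[:1] == a and w[:1].isupper() for w, a in zip(words, acronym))

def acro_features(acronym, words):
    """Assign A_begin / A_continue / A_end to the words of a matching
    expansion and A_unique to the acronym itself, as described above."""
    feats = {}
    if matches_acronym(acronym, words):
        feats[words[0]] = "A_begin"
        for w in words[1:-1]:
            feats[w] = "A_continue"
        feats[words[-1]] = "A_end"
        feats[acronym] = "A_unique"
    return feats
```

For the FCC example from the text, Federal gets A_begin, Communications gets A_continue, Commission gets A_end, and FCC gets A_unique.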
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_i P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}.', 'Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i-1} (or for w_{i+1}) is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i.', 'If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
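The Case-and-Zone feature group described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation; the function name and the dictionary representation of binary features are assumptions.

```python
def case_zone_features(token, zone):
    """Sketch of the Case-and-Zone group: emit (case, zone) binary
    features for a token, following the rules in the text. An allCaps
    token is also initCaps, as the text notes."""
    feats = {}
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1
    if token.isupper() and token.isalpha():
        feats[("allCaps", zone)] = 1        # all capital letters
    elif token[:1].islower() and not token.islower():
        feats[("mixedCaps", zone)] = 1      # starts lower, mixed case

    return feats

feats = case_zone_features("IBM", "TXT")
```

For example, `case_zone_features("IBM", "TXT")` fires both the (initCaps, TXT) and (allCaps, TXT) features, matching the rule that an allCaps token is also initCaps.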
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the neighboring token is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . ., December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
., Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
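The "frequency" count used to build Corporate-Suffix-List can be sketched as below, reproducing the worked example from the text (Electric Corp. three times, Manufacturing Corp. five times, so Corp. has frequency 2). The function name and the whitespace tokenization are illustrative assumptions.

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """Sketch of the Corporate-Suffix-List 'frequency': for each last
    token of an organization name, count the number of DISTINCT
    preceding tokens it occurs with (not raw occurrence counts)."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# Worked example from the text: frequency of "Corp." is 2.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freqs = suffix_frequencies(names)
```

Counting distinct preceding tokens rather than raw occurrences keeps a suffix that appears with many different organizations ranked above one that merely repeats with a single organization.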
For a token w_i that is in a consecutive sequence of initCaps tokens, if a token immediately following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2, Sources of Dictionaries: Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence “Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.”, a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp. 
has an additional feature of I_end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3 (F-measure after successive addition of each global feature group), MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', '(Table 5: Comparison of results for MUC6.)', 'Table 4 (Training Data, in articles and tokens): MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 tokens (MUC6) and 790,000 tokens (MUC7); MENE: 350 articles / 321,000 tokens (MUC7).', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
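The error-reduction figures quoted above can be checked directly from the Table 3 F-measures, taking the error as 100 minus the F-measure:

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error when the F-measure improves from
    baseline_f to final_f (error = 100 - F)."""
    return (final_f - baseline_f) / (100.0 - baseline_f)

muc6 = error_reduction(90.75, 93.27)   # Table 3: Baseline vs. + UNIQ, MUC6
muc7 = error_reduction(85.22, 87.24)   # Table 3: Baseline vs. + UNIQ, MUC7
```

This reproduces the 27% (MUC6) and 14% (MUC7) reductions reported in the text.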
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu)', '(3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', '(Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with more training data.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive D10-1083,D10-1083,3,19,"In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.","We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
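The exponential-form maximum entropy model described in Section 3.1 above can be sketched as follows. The feature, weight, and class names are illustrative assumptions, not trained values or the paper's actual classes.

```python
import math

def maxent_prob(outcome, history, features, outcomes):
    """Sketch of the exponential form from Section 3.1:
    p(o|h) = exp(sum_j w_j * f_j(h, o)) / Z(h), where each f_j is a
    binary feature function and Z(h) normalizes over all outcomes."""
    def score(o):
        return math.exp(sum(w * f(history, o) for f, w in features))
    z = sum(score(o) for o in outcomes)   # normalization Z(h)
    return score(outcome) / z

# Toy binary feature: fires when predicting a person class after "Mr."
f_after_mr = lambda h, o: 1 if h.get("prev") == "Mr." and o == "person_begin" else 0
p = maxent_prob("person_begin", {"prev": "Mr."},
                [(f_after_mr, 2.0)], ["person_begin", "not-a-name"])
```

With a single feature of weight 2.0 the model reduces to a logistic function, so p is exp(2)/(1 + exp(2)), about 0.88; GIS would estimate such weights from the training constraints.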
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags T that maximizes the probability P(T | W), where W is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in W.', 'Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | W, D), where T is the sequence of named-entity tags assigned to the words in the sentence W, and D is the information that can be extracted from the whole document containing W. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) * prod_j alpha_j^f_j(h,o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h,o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h,o) = 1 if o = true and previous word = the, and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(1 http://maxent.sourceforge.net)', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = prod_i P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}.', 'Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i-1} (or for w_{i+1}) is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i.', 'If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
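The decoding step from Section 3.2 above (classifier probabilities combined with 0/1 transition probabilities, then dynamic programming) can be sketched as follows. The class names and probabilities are hypothetical, and the per-word distributions stand in for the maximum entropy classifier's output.

```python
def viterbi(word_dists, admissible):
    """Sketch of Section 3.2 decoding: word_dists is a list of
    {class: probability} dicts, one per word; transitions have
    probability 1 if admissible and 0 otherwise, so inadmissible
    sequences are simply ruled out of the dynamic program."""
    best = {c: (p, [c]) for c, p in word_dists[0].items()}
    for dist in word_dists[1:]:
        nxt = {}
        for c, p in dist.items():
            cands = [(bp * p, path + [c])
                     for prev, (bp, path) in best.items()
                     if admissible(prev, c)]   # keep only admissible transitions
            if cands:
                nxt[c] = max(cands)
        best = nxt
    return max(best.values())[1]

# Toy constraint from the text: person_begin cannot precede location_unique.
ok = lambda a, b: not (a == "person_begin" and b == "location_unique")
seq = viterbi([{"person_begin": 0.6, "not-a-name": 0.4},
               {"location_unique": 0.7, "person_end": 0.3}], ok)
```

Even though person_begin is locally the most probable first class, the admissibility constraint forces the decoder onto the not-a-name / location_unique path, whose total probability (0.28) beats the best admissible alternative (0.18).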
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the neighboring token is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . ., December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
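The "frequency" count described above, used to rank candidate corporate suffixes, can be sketched as follows. This is a minimal illustration in Python under our own naming (`suffix_frequency` is not the authors' code):

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """For each token appearing as the LAST token of an organization name,
    count the number of DISTINCT preceding tokens it occurs with.  This is
    the "frequency" used to rank candidate corporate suffixes."""
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            # record the token immediately before the final token
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}

# The paper's example: Electric Corp. seen 3 times, Manufacturing Corp.
# seen 5 times, and Corp. never seen with any other preceding token.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
suffix_frequency(names)["Corp."]  # -> 2 distinct preceding tokens
```

Counting distinct preceders rather than raw occurrences keeps a suffix that happens to appear in one very frequent name from dominating the list.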
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens from to is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'Table 2: Sources of Dictionaries — Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
. (3)', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be more certain that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (system: MUC6 articles / tokens; MUC7 articles / tokens): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'We did not train with both MUC6 and MUC7 training data at the same time because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is far less than that used by MENE and IdentiFinder.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
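The error reductions quoted above follow directly from the Table 3 F-measures, taking error = 100 − F. A quick arithmetic check (our own, with a hypothetical function name, not from the paper):

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error, where error = 100 - F-measure."""
    base_err = 100.0 - baseline_f
    final_err = 100.0 - final_f
    return 100.0 * (base_err - final_err) / base_err

# MUC6: 90.75% -> 93.27% F-measure
round(error_reduction(90.75, 93.27))  # -> 27
# MUC7: 85.22% -> 87.24% F-measure
round(error_reduction(85.22, 87.24))  # -> 14
```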
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu.)', '(Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.']",extractive
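The ACRO feature group described above can be sketched roughly as follows. This is our own simplified illustration in Python, not the authors' code, and the feature names (`A_begin`, `A_continue`, `A_end`, `A_unique`) are our own rendering of the paper's A begin / A continue / A end / A unique:

```python
def acro_features(tokens):
    """Sketch of the ACRO feature group: all-caps words are collected as
    acronyms, and any sequence of capitalized words whose initial letters
    spell an acronym found in the document gets A_begin / A_continue /
    A_end, while every occurrence of the acronym gets A_unique."""
    feats = {i: set() for i in range(len(tokens))}
    acronyms = {t for t in tokens if t.isalpha() and t.isupper() and len(t) > 1}
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            # the window must be all capitalized words whose initials spell acro
            if all(w[:1].isupper() for w in window) and \
                    "".join(w[0] for w in window) == acro:
                feats[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].add("A_continue")
                feats[i + n - 1].add("A_end")
                for k, tok in enumerate(tokens):
                    if tok == acro:
                        feats[k].add("A_unique")
    return feats

# The paper's example: FCC matches Federal Communications Commission.
doc = "FCC to meet : Federal Communications Commission officials said".split()
f = acro_features(doc)
# Federal -> A_begin, Communications -> A_continue, Commission -> A_end,
# FCC -> A_unique
```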
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F-measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: the feature is 1 if the outcome is true and the previous word is the, and 0 otherwise.', 'The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen fewer than a small number of times during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3)In sentence (1), McCann can be a person or an orga nization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with â\x80\x9cBush put a freeze on . . .', 'â\x80\x9d, because Bush is the first word, the initial caps might be due to its position (as in â\x80\x9cThey put a freeze on . . .', 'â\x80\x9d).', 'If somewhere else in the document we see â\x80\x9crestrictions put in place by President Bushâ\x80\x9d, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 (Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.) (Table 4: Training Data. Systems, with No. of Articles / No. of Tokens for MUC6 and MUC7: MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.) (Table 5: Comparison of results for MUC6.)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's.
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) (Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive I05-5011,I05-5011,7,158,The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.,"By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: 1 if = true and previous word = the, 0 otherwise. The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, (footnote 1: http://maxent.sourceforge.net) 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens from to is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 (Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.) (Table 4: Training Data. Systems, with No. of Articles / No. of Tokens for MUC6 and MUC7: MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.) (Table 5: Comparison of results for MUC6.)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's.
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) (Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive P11-1061_swastika,P11-1061,6,160,"Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.","These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags T = (t1, ..., tn) that maximizes the probability P(T | s), where s = (w1, ..., wn) is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | s, D), where T is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
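The system's core is a maximum entropy classifier trained with Generalized Iterative Scaling (GIS), as described in the paper's Section 3.1; the paper itself used the Java-based opennlp maxent package. As a rough sketch of how such a classifier works — toy data and hypothetical feature names, not the paper's implementation:

```python
import math
from collections import defaultdict

def predict(weights, feats, outcomes):
    """Exponential form: p(o | h) is proportional to exp(sum of weights of
    the active (feature, outcome) pairs), normalized by Z(h)."""
    scores = {o: math.exp(sum(weights.get((f, o), 0.0) for f in feats))
              for o in outcomes}
    z = sum(scores.values())  # normalization function Z(h)
    return {o: s / z for o, s in scores.items()}

def train_gis(events, n_iters=200):
    """Generalized Iterative Scaling over (feature set, outcome) events.
    Simplification: C is the max number of active features per event and
    the usual GIS correction feature is omitted."""
    C = max(len(feats) for feats, _ in events)
    outcomes = sorted({o for _, o in events})
    empirical = defaultdict(float)          # observed feature expectations
    for feats, o in events:
        for f in feats:
            empirical[(f, o)] += 1.0 / len(events)
    weights = {k: 0.0 for k in empirical}
    for _ in range(n_iters):
        expected = defaultdict(float)       # model feature expectations
        for feats, _ in events:
            for o, p in predict(weights, feats, outcomes).items():
                for f in feats:
                    if (f, o) in weights:
                        expected[(f, o)] += p / len(events)
        for k in weights:                   # GIS update, log-space
            weights[k] += math.log(empirical[k] / expected[k]) / C
    return weights, outcomes
```

Training on a few toy events such as ({"initCaps", "prev-mr"}, "person") quickly drives the classifier to prefer "person" for capitalized tokens preceded by "Mr."; the real system has 29 outcome classes and far richer feature groups.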
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) prod_j alpha_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, f_j(h, o) = 1 if o = true and the previous word in the context h is "the", and 0 otherwise.', 'The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(1 http://maxent.sourceforge.net)', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c1, ..., cn | s, D) = prod_i p(ci | s, D) * P(ci | ci-1), where p(ci | s, D) is determined by the maximum entropy classifier and P(ci | ci-1) is the transition probability between word classes.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. 
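The "frequency" used to compile cslist — the number of distinct tokens that immediately precede a candidate suffix in organization names, rather than its raw count — can be sketched as follows (a minimal reading of the procedure; tokenization details are assumed):

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """org_names: tokenized organization names from the training data.
    A candidate suffix's 'frequency' is the number of distinct tokens
    seen immediately before it, not how many times it occurs."""
    preceding = defaultdict(set)
    for tokens in org_names:
        for prev, tok in zip(tokens, tokens[1:]):
            preceding[tok].add(prev)
    return {tok: len(prevs) for tok, prevs in preceding.items()}

# The paper's own example: Electric Corp. seen 3 times, Manufacturing
# Corp. seen 5 times, Corp. with no other preceding token -> frequency 2.
freqs = suffix_frequencies([["Electric", "Corp."]] * 3
                           + [["Manufacturing", "Corp."]] * 5)
assert freqs["Corp."] == 2
```

The most frequent tokens under this measure would then be kept as the Corporate-Suffix-List; a Person-Prefix-List would count distinct following tokens analogously.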
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on ...", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on ...").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token needs to be in initCaps to be considered for this feature.', 'If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data. Systems, No. of Articles / No. of Tokens (MUC6; MUC7): MENERGI 318 / 160,000; 200 / 180,000. IdentiFinder - / 650,000; - / 790,000. MENE - / -; 350 / 321,000.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
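The ACRO group described above might be implemented roughly as below: collect all-capitalized tokens as acronyms, then mark maximal runs of initial-capitalized words whose initials spell one of them. The paper does not fully specify the matching rules, so the run-based matching here is an assumption:

```python
def acro_features(tokens):
    """Assign A_begin / A_continue / A_end to initCaps sequences whose
    initials match an acronym seen in the document, and A_unique to the
    acronym token itself (e.g., FCC vs. Federal Communications Commission)."""
    acronyms = {t for t in tokens if t.isalpha() and t.isupper() and len(t) > 1}
    feats = [set() for _ in tokens]
    i, n = 0, len(tokens)
    while i < n:
        if tokens[i][:1].isupper() and not tokens[i].isupper():
            j = i  # extend a maximal run of initCaps (not allCaps) tokens
            while j < n and tokens[j][:1].isupper() and not tokens[j].isupper():
                j += 1
            if "".join(t[0] for t in tokens[i:j]) in acronyms:
                feats[i].add("A_begin")
                for k in range(i + 1, j - 1):
                    feats[k].add("A_continue")
                feats[j - 1].add("A_end")
            i = j
        else:
            if tokens[i] in acronyms:
                feats[i].add("A_unique")
            i += 1
    return feats
```

On the paper's example, Federal / Communications / Commission receive A begin / A continue / A end, and FCC receives A unique.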
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W11-2123_vardha,W11-2123,6,276,The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.,"These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
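The testing procedure described earlier — per-word class probabilities from the maximum entropy classifier, a 0/1 transition probability that rules out inadmissible class pairs, and dynamic programming to pick the best sequence — amounts to a small Viterbi search, sketched here with hypothetical class names and probabilities:

```python
def decode(word_probs, admissible):
    """word_probs: one dict per word mapping class -> p(class | s, D), as
    produced by the maximum entropy classifier.  admissible(prev, cur)
    encodes the 0/1 transition probabilities.  Returns the admissible
    class sequence with the highest product of probabilities."""
    best = {c: (p, [c]) for c, p in word_probs[0].items()}
    for probs in word_probs[1:]:
        nxt = {}
        for cur, p in probs.items():
            cands = [(score * p, path + [cur])
                     for prev, (score, path) in best.items()
                     if admissible(prev, cur)]
            if cands:  # classes reachable only via inadmissible steps are dropped
                nxt[cur] = max(cands, key=lambda c: c[0])
        best = nxt
    return max(best.values(), key=lambda c: c[0])[1]

# e.g., forbid 'person begin' followed immediately by 'location unique'
ok = lambda prev, cur: not (prev == "person begin" and cur == "location unique")
tags = decode([{"person begin": 0.6, "not-a-name": 0.4},
               {"location unique": 0.7, "person end": 0.3}], ok)
assert tags == ["not-a-name", "location unique"]
```

Note how the constraint changes the answer: the locally best pair (person begin, location unique) is inadmissible, so the search falls back to the best sequence whose transitions all have probability 1.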
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏_j α_j^{f_j(h,o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the; 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(1 http://maxent.sourceforge.net)', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
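The exponential model and binary feature function described above can be sketched as follows. This is a minimal toy illustration: the feature, the class labels, and the weight 1.5 are invented for the example, and real weights would come from GIS training, not the opennlp implementation itself.

```python
import math

def f_prev_the(history, outcome):
    # Binary feature from the text's example: fires only when the outcome
    # is the target class AND the previous word is "the".
    return 1 if outcome == "in-class" and history.get("prev") == "the" else 0

def maxent_prob(history, outcome, outcomes, features, weights):
    """p(o | h) = exp(sum_j lambda_j * f_j(h, o)) / Z(h)."""
    def unnorm(o):
        return math.exp(sum(w * f(history, o) for f, w in zip(features, weights)))
    z = sum(unnorm(o) for o in outcomes)  # normalization Z(h)
    return unnorm(outcome) / z

# Toy usage: one feature with an illustrative (not GIS-trained) weight.
features, weights = [f_prev_the], [1.5]
outcomes = ["in-class", "not-in-class"]
p = maxent_prob({"prev": "the"}, "in-class", outcomes, features, weights)
```

Because the only feature fires for `in-class` after "the", that context gets boosted probability; in any other context both outcomes score equally.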
follows: P(c_1, …, c_n | s, D) = ∏_{i=1}^{n} P(c_i | s, D) · P(c_i | c_{i−1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
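The 0/1 transition probabilities and the dynamic programming selection described above can be sketched roughly as follows. The class names and admissibility rules are inferred from the begin/continue/end/unique scheme in the text (e.g., person begin may not be followed by location unique), and are not taken from the authors' code.

```python
def admissible(prev, cur):
    """Transition 'probability': True (1) if the class pair is allowed, else False (0)."""
    def parts(c):
        return tuple(c.rsplit("_", 1)) if "_" in c else (c, "unique")
    if prev is None:  # a sentence cannot start inside an entity
        return parts(cur)[1] in ("begin", "unique")
    pname, psub = parts(prev)
    cname, csub = parts(cur)
    if psub in ("begin", "continue"):  # inside an entity: must continue or end it
        return cname == pname and csub in ("continue", "end")
    return csub in ("begin", "unique")  # after end/unique/O, a new unit may start

def viterbi(probs, classes):
    """probs[i][c]: maximum entropy estimate P(c | word i, context).
    Picks the admissible class sequence with the highest product of probabilities."""
    best = [{c: (probs[0][c], None) for c in classes if admissible(None, c)}]
    for i in range(1, len(probs)):
        layer = {}
        for c in classes:
            cands = [(p * probs[i][c], pc)
                     for pc, (p, _) in best[-1].items() if admissible(pc, c)]
            if cands:
                layer[c] = max(cands)
        best.append(layer)
    cur = max(best[-1], key=lambda c: best[-1][c][0])
    path = [cur]
    for layer in reversed(best[1:]):  # backtrack through stored predecessors
        cur = layer[cur][1]
        path.append(cur)
    return path[::-1]

# Toy two-word sentence with made-up classifier probabilities.
classes = ["person_begin", "person_continue", "person_end", "person_unique", "O"]
probs = [{"person_begin": 0.6, "person_continue": 0.05, "person_end": 0.2,
          "person_unique": 0.1, "O": 0.05},
         {"person_begin": 0.1, "person_continue": 0.2, "person_end": 0.6,
          "person_unique": 0.05, "O": 0.05}]
path = viterbi(probs, classes)
```

Multiplying by the 0/1 transition term is what rules out sequences like person begin followed by location unique, while leaving the maximum entropy scores otherwise untouched.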
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of the Previous and Next Token: Similarly, if the previous token (or the next token) is initCaps, a feature (initCaps, zone) for that token is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If a token is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of the current token. If it has initCaps, then a feature (initCaps, previous token string) is set to 1.', 'If it is not initCaps, then (not-initCaps, previous token string) is set to 1.', 'Same for the next token. 
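The case-and-zone features above (plus one of the Table 1 token-string features, InitCapPeriod) can be sketched as follows. The function shape and feature encodings are illustrative assumptions, not the authors' implementation; note that, as the text states, an allCaps token also fires initCaps.

```python
def case_zone_features(token, zone):
    """Binary (case, zone) features; an allCaps token is also initCaps."""
    feats = set()
    if token[:1].isupper():
        feats.add(("initCaps", zone))
    if token.isupper():
        feats.add(("allCaps", zone))          # fires in addition to initCaps
    if token[:1].islower() and any(ch.isupper() for ch in token):
        feats.add(("mixedCaps", zone))
    if token[:1].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")            # a Table 1 token-string feature (e.g. Mr.)
    return feats
```

For example, `case_zone_features("IBM", "TXT")` fires both the initCaps and allCaps features for the TXT zone.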
In the case where the next token is a hyphen, then the token after it is also used as a feature: (initCaps, that token) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the previous and next tokens are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the previous token is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If the token is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. 
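The unigram/bigram dictionary matching described above can be sketched as follows. The feature name `LocationName` is a stand-in (the text only says "a feature for that list"), and restricting the test to initCaps tokens outside commonWords follows the description.

```python
def dictionary_features(tokens, person_first, location_unigrams,
                        location_bigrams, common_words):
    """Set a per-list feature for initCaps tokens not in commonWords;
    locations are matched as unigrams and as consecutive-token bigrams."""
    feats = [set() for _ in tokens]
    for i, tok in enumerate(tokens):
        if not tok[:1].isupper() or tok in common_words:
            continue  # only initCaps tokens not in commonWords are tested
        if tok in person_first:
            feats[i].add("PersonFirstName")
        if tok in location_unigrams:
            feats[i].add("LocationName")
        if i + 1 < len(tokens) and (tok, tokens[i + 1]) in location_bigrams:
            feats[i].add("LocationName")       # both tokens of the bigram match
            feats[i + 1].add("LocationName")
    return feats
```

With the bigram ("New", "York") in the location list, both tokens of "New York" receive the location feature even though neither matches as a unigram.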
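The "frequency" computation for candidate corporate suffixes (counting distinct preceding tokens rather than raw occurrences) can be sketched as follows, reproducing the worked example from the text: Electric Corp. seen 3 times and Manufacturing Corp. 5 times still gives Corp. a "frequency" of 2.

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """'Frequency' of a candidate suffix = number of DISTINCT tokens seen
    immediately before it as the last word of an organization name."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            prev_tokens[toks[-1]].add(toks[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

freqs = suffix_frequencies(["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5)
```

Using distinct predecessors rather than raw counts keeps a suffix like Corp. from being dominated by one frequent company name; the most "frequent" suffixes under this measure form Corporate-Suffix-List.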
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence (or the token just after it) is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1) CEO of McCann ... (2) The McCann family ... (3)', '(Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net. Corporate Names: http://www.fmlx.com. Person First Names: http://www.census.gov/genealogy/names. Person Last Names.)', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on ...”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on ...”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
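The ICOC check described above can be sketched as follows. The feature labels and the (token, zone, sentence-initial) representation are assumptions for illustration; the rule itself (look at the first occurrence in an unambiguous, non-sentence-initial position in a TXT/TEXT zone) follows the text.

```python
def icoc_feature(doc_tokens, word):
    """doc_tokens: (token, zone, is_sentence_initial) triples for the document.
    Reports the case of the first occurrence of `word` in an unambiguous
    position, i.e. a non-first-word in a TXT/TEXT zone; None if there is none."""
    for tok, zone, sent_initial in doc_tokens:
        if zone in ("TXT", "TEXT") and not sent_initial and tok.lower() == word.lower():
            return "ICOC-initCaps" if tok[:1].isupper() else "ICOC-not-initCaps"
    return None  # no unambiguous occurrence in the document

# The text's example: sentence-initial "Bush" is ambiguous, but a later
# "President Bush" occurrence is unambiguously initCaps.
doc = [("Bush", "TXT", True), ("put", "TXT", False), ("a", "TXT", False),
       ("freeze", "TXT", False), ("President", "TXT", False), ("Bush", "TXT", False)]
```

Here the sentence-initial occurrence of "Bush" is skipped, and the later non-initial occurrence supplies the more reliable case evidence.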
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
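The ACRO matching described above can be sketched as follows. Matching an acronym to the initial letters of an initCaps run is a simplifying assumption (the paper does not spell out its exact matching rule), and the feature names are transliterations of A begin/continue/end/unique.

```python
def acro_features(tokens):
    """Match runs of initCaps (non-all-caps) words against all-caps acronyms
    found in the same document, e.g. FCC vs Federal Communications Commission."""
    acronyms = {t for t in tokens if len(t) > 1 and t.isupper() and t.isalpha()}
    feats = [set() for _ in tokens]
    i = 0
    while i < len(tokens):
        j = i  # collect a maximal run of initCaps words that are not all-caps
        while j < len(tokens) and tokens[j][:1].isupper() and not tokens[j].isupper():
            j += 1
        run = tokens[i:j]
        if len(run) >= 2 and "".join(w[0] for w in run) in acronyms:
            feats[i].add("A_begin")
            for k in range(i + 1, j - 1):
                feats[k].add("A_continue")
            feats[j - 1].add("A_end")
        i = max(j, i + 1)
    for i, t in enumerate(tokens):
        if t in acronyms:
            feats[i].add("A_unique")  # the acronym token itself
    return feats

toks = ["FCC", "fined", "Federal", "Communications", "Commission", "today"]
feats = acro_features(toks)
```

As in the text's example, FCC gets A unique while Federal/Communications/Commission get A begin/A continue/A end.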
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token needs to be in initCaps to be considered for this feature.', 'If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2

Table 3: F-measure after successive addition of each global feature group
           MUC6    MUC7
Baseline   90.75%  85.22%
+ ICOC     91.50%  86.24%
+ CSPP     92.89%  86.96%
+ ACRO     93.04%  86.99%
+ SOIC     93.25%  87.22%
+ UNIQ     93.27%  87.24%

Table 4: Training Data
              MUC6 Articles  MUC6 Tokens  MUC7 Articles  MUC7 Tokens
MENERGI       318            160,000      200            180,000
IdentiFinder  –              650,000      –              790,000
MENE          –              –            350            321,000

(Table 5: Comparison of results for MUC6.)

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is much less than that used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens. Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive P87-1015_swastika,P87-1015,3,2,"On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.","We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
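The "frequency" used to rank candidate corporate suffixes counts distinct preceding tokens, not raw occurrences. A sketch of that computation, following the Electric Corp. / Manufacturing Corp. example above (function and variable names are ours):

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """'Frequency' of a candidate corporate suffix = number of distinct
    tokens that precede it as the last word of an organization name."""
    preceding = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            preceding[toks[-1]].add(toks[-2])   # distinct previous tokens only
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}
```

With 3 occurrences of Electric Corp. and 5 of Manufacturing Corp., the frequency of Corp. is 2, matching the example in the text.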
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens from to is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 (Table 3: F-measure after successive addition of each global feature group. Baseline: MUC6 90.75%, MUC7 85.22%; + ICOC: 91.50%, 86.24%; + CSPP: 92.89%, 86.96%; + ACRO: 93.04%, 86.99%; + SOIC: 93.25%, 87.22%; + UNIQ: 93.27%, 87.24%.) (Table 5: Comparison of results for MUC6.) (Table 4: Training Data, in articles / tokens. MENERGI: MUC6 318 / 160,000, MUC7 200 / 180,000; IdentiFinder: MUC6 – / 650,000, MUC7 – / 790,000; MENE: MUC6 – / –, MUC7 350 / 321,000.) For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'We did not train with both MUC6 and MUC7 training data at the same time because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is much less than that used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
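The ACRO group described above matches sequences of initCaps words against all-caps acronyms found in the same document (FCC vs. Federal Communications Commission). A sketch under simplifying assumptions (acronyms are whole-word all-caps tokens, and a match compares first letters; feature names mirror the text, the rest is ours):

```python
def acro_features(tokens):
    """Sketch of the ACRO feature group over one document's tokens.
    Returns a dict mapping token index -> set of ACRO feature names."""
    acronyms = {t for t in tokens if len(t) > 1 and t.isalpha() and t.isupper()}
    feats = {i: set() for i in range(len(tokens))}
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            span = tokens[i:i + n]
            # sequence of initCaps (but not all-caps) words whose initials spell the acronym
            if all(t[:1].isupper() and not t.isupper() for t in span) \
               and "".join(t[0] for t in span) == acro:
                feats[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].add("A_continue")
                feats[i + n - 1].add("A_end")
                for k, t in enumerate(tokens):
                    if t == acro:
                        feats[k].add("A_unique")
    return feats
```

For the FCC example in the text, Federal gets A_begin, Communications A_continue, Commission A_end, and FCC itself A_unique.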
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) (Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive I05-5011,I05-5011,4,3,"They focused on phrases which two Named Entities, and proceed in two stages.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
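The formulation above conditions the tag sequence on both the sentence and document-level information, so a single classifier sees local and global evidence at once. A schematic of what that feature extraction might look like (all feature names here are illustrative, not the paper's):

```python
def features(doc_tokens, i):
    """Schematic single-classifier feature vector for token i: local
    evidence (the token and its neighbors) plus global evidence drawn
    from other occurrences of the same word in the whole document."""
    tok = doc_tokens[i]
    feats = set()
    # Local: the token itself and its immediate neighbors
    feats.add(("cur", tok))
    if i > 0:
        feats.add(("prev", doc_tokens[i - 1]))
    if i + 1 < len(doc_tokens):
        feats.add(("next", doc_tokens[i + 1]))
    # Global: the case of other occurrences of the same word elsewhere
    for j, other in enumerate(doc_tokens):
        if j != i and other.lower() == tok.lower():
            feats.add(("other-occurrence-initCaps", other[:1].isupper()))
    return feats
```

The point of the design is that no second, error-correcting classifier is needed: document-wide evidence enters the one maximum entropy model as ordinary binary features.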
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (e.g., first “President George Bush”, then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F-measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) exp(sum_j lambda_j f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters lambda_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(Footnote 1: http://maxent.sourceforge.net) 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
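The decoding scheme described in Section 3.2 above combines per-token maximum entropy probabilities with 0/1 transition probabilities (1 if the class sequence is admissible, 0 otherwise) and selects the best sequence by dynamic programming. A minimal sketch, assuming per-token class distributions are given as dicts (this is our own illustration, not the original code):

```python
import math

def viterbi(probs, admissible):
    """DP decoder: score of a class sequence = product of per-token
    classifier probabilities; an inadmissible transition (e.g.,
    person_begin -> location_unique) contributes 0 and is pruned.
    `probs` is a list of {class: P(class | token)} dicts; `admissible`
    is a predicate on (prev_class, class)."""
    # best[c] = (log-prob of best admissible sequence ending in c, its path)
    best = {c: (math.log(p), [c]) for c, p in probs[0].items() if p > 0}
    for dist in probs[1:]:
        nxt = {}
        for c, p in dist.items():
            if p <= 0:
                continue
            cands = [(lp + math.log(p), path + [c])
                     for prev, (lp, path) in best.items()
                     if admissible(prev, c)]
            if cands:
                nxt[c] = max(cands)
        best = nxt
    return max(best.values())[1] if best else None
```

The hard 0/1 transitions mean an inadmissible pairing can never survive decoding, no matter how confident the per-token classifier is.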
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
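The dictionary features described above fire only for initCaps tokens that are not in commonWords (words seen more than 10 times in training). A minimal sketch of that gating and lookup, assuming dictionaries are given as sets keyed by feature name (names like PersonFirstName follow the text; the rest is ours):

```python
def dictionary_features(token, common_words, dictionaries):
    """Sketch of the dictionary feature group: test an initCaps,
    non-common token against each name list; a hit sets that list's
    feature. `dictionaries` maps feature name -> set of entries."""
    feats = set()
    # Only initCaps tokens outside commonWords are tested at all
    if not token[:1].isupper() or token in common_words:
        return feats
    for list_name, entries in dictionaries.items():
        if token in entries:
            feats.add(list_name)   # e.g., "PersonFirstName"
    return feats
```

This reproduces the Barry example in the text: Barry is initCaps and not a common word, so finding it in the person-first-name list sets PersonFirstName.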
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens from to is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If the word is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group
           MUC6    MUC7
Baseline   90.75%  85.22%
+ ICOC     91.50%  86.24%
+ CSPP     92.89%  86.96%
+ ACRO     93.04%  86.99%
+ SOIC     93.25%  87.22%
+ UNIQ     93.27%  87.24%', 'Table 4: Training Data
             MUC6                 MUC7
             Articles  Tokens     Articles  Tokens
MENERGI      318       160,000    200       180,000
IdentiFinder –         650,000    –         790,000
MENE         –         –          350       321,000', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', 'In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).', 'IdentiFinder '99's results are considerably better than IdentiFinder '97's.
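The acronym matching behind the ACRO group can be sketched in a few lines of Python. This is our own minimal illustration under stated assumptions, not the authors' implementation; the function name and the feature spellings (A_begin, etc.) are ours:

```python
def acro_features(tokens):
    """Assign A_begin / A_continue / A_end to initCaps sequences whose
    initials spell an all-caps acronym found in the same document, and
    A_unique to the acronym itself (a simplified sketch)."""
    features = [set() for _ in tokens]
    # Words made up entirely of capital letters are stored as acronyms.
    acronyms = {t for t in tokens if len(t) > 1 and t.isalpha() and t.isupper()}
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            seq = tokens[i:i + n]
            # Candidate expansion: initCaps (not all-caps) words whose
            # initials spell out the acronym.
            if all(w[:1].isupper() and not w.isupper() for w in seq) and \
               "".join(w[0] for w in seq) == acro:
                features[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    features[j].add("A_continue")
                features[i + n - 1].add("A_end")
                for k, t in enumerate(tokens):
                    if t == acro:
                        features[k].add("A_unique")
    return features

doc = "The Federal Communications Commission said FCC rules apply".split()
feats = acro_features(doc)
```

On this example, Federal/Communications/Commission receive A_begin/A_continue/A_end and the token FCC receives A_unique, mirroring the paper's FCC example.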
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).', 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', 'We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.', 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', 'Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).', 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviated forms of entities already mentioned previously.']
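The single-classifier idea summarized above, one maximum entropy model whose feature vector mixes sentence-local and document-level evidence, can be sketched roughly as follows. The helper, the feature names, and the example sentence are our illustrative assumptions, not the authors' code:

```python
def token_features(doc_tokens, i):
    """Local + global features for token i of one document (illustrative)."""
    w = doc_tokens[i]
    feats = {
        # Local features: properties of the token and its neighbours.
        "initCaps": w[:1].isupper(),
        "allCaps": w.isupper() and len(w) > 1,
        "prev=Mr.": i > 0 and doc_tokens[i - 1] == "Mr.",
    }
    # Global features: evidence from OTHER occurrences of the same word
    # in the same document, available because the classifier sees the
    # whole document rather than one sentence at a time.
    others = [j for j, t in enumerate(doc_tokens)
              if j != i and t.lower() == w.lower()]
    feats["ICOC-initCaps"] = any(doc_tokens[j][:1].isupper() for j in others)
    feats["Other-PP"] = any(j > 0 and doc_tokens[j - 1] == "Mr." for j in others)
    return feats

doc = "Bush put a freeze on plans . President Bush said".split()
f0 = token_features(doc, 0)   # sentence-initial "Bush"
```

For the sentence-initial token, the local initCaps feature is uninformative, but the ICOC-style global feature fires because another occurrence of Bush is capitalized in an unambiguous position; a single maximum entropy classifier would then be trained over feature dictionaries of this shape.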
syntactic disambiguation.', 'Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Kübler, 2005).', '1 The apparent difficulty of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajič and Zemánek, 2004; Habash and Roth, 2009).', 'To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply "Arabic") because of the unusual opportunity it presents for comparison to English parsing results.', 'The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993).', 'Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages.', 'But Arabic contains a variety of linguistic phenomena unseen in English.', 'Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation.', 'For humans, this characteristic can impede the acquisition of literacy.', 'How do additional ambiguities caused by devocalization affect statistical learning?', 'How should the absence of vowels and syntactic markers influence annotation choices and grammar development?', 'Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering.', 'Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).', 'Next we show that the ATB is similar to other treebanks in gross statistical terms, but that annotation consistency remains low relative to English (§3).', 'We then use linguistic and annotation insights to develop a manually annotated grammar for Arabic (§4).', 'To facilitate comparison with previous work, we exhaustively evaluate this grammar and two other parsing models when gold
segmentation is assumed (§5).', 'Finally, we provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing (§6).', 'We quantify error categories in both evaluation settings.', 'To our knowledge, ours is the first analysis of this kind for Arabic parsing.', 'Arabic is a morphologically rich language with a root-and-pattern system similar to other Semitic languages.', 'The basic word order is VSO, but SVO, VOS, and VO configurations are also possible.2', 'Nouns and verbs are created by selecting a consonantal root (usually triliteral or quadriliteral), which bears the semantic core, and adding affixes and diacritics.', 'Particles are uninflected.', 'Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form an.
  Word  Gloss            Head of  Complement  POS
1 inna  "Indeed, truly"  VP       Noun        VBP
2 anna  "That"           SBAR     Noun        IN
3 in    "If"             SBAR     Verb        IN
4 an    "to"             SBAR     Verb        IN', 'The distinctions in the ATB are linguistically justified, but complicate parsing.', 'Table 8a shows that the best model recovers SBAR at only 71.0% F1.', 'Diacritics can also be used to specify grammatical relations such as case and gender.', 'But diacritics are not present in unvocalized text, which is the standard form of, e.g., news media documents.3', '[Figure 1 trees: (a) reference and (b) Stanford analyses of a quotation construction headed by VBD "she added"; the tree internals are not recoverable from this extraction.]', 'Let us consider an example of ambiguity caused by devocalization.', 'Table 1 shows four words whose unvocalized surface forms are indistinguishable.', 'Whereas Arabic linguistic theory assigns (1) and (2) to the class of pseudo verbs inna and her sisters since they can be inflected, the ATB conventions treat (2) as a complementizer, which means that it must be the head of SBAR.', 'Because these two words have
identical complements, syntax rules are typically unhelpful for distinguishing between them.', 'This is especially true in the case of quotations, which are common in the ATB, where (1) will follow a verb like (2) (Figure 1).', 'Even with vocalization, there are linguistic categories that are difficult to identify without semantic clues.', 'Two common cases are the attributive adjective and the process nominal maSdar, which can have a verbal reading.4', 'Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness.', 'Moreover, they are used as substantives much more frequently than is done in English.', '2 Unlike machine translation, constituency parsing is not significantly affected by variable word order.', 'However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009).', 'In particular, the decision to represent arguments in verb-initial clauses as VP internal makes VSO and VOS configurations difficult to distinguish.', 'Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).', '3 Techniques for automatic vocalization have been studied (Zitouni et al., 2006; Habash and Rambow, 2007).', 'However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.', '4 Traditional Arabic linguistic theory treats both of these types as subcategories of noun.
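The collision that Table 1 describes can be reproduced directly: stripping diacritics and collapsing alif variants (the normalization applied later in §5) maps all four particles to one surface form. The code-point sets and the diacritized spellings below are our illustrative assumptions:

```python
# Arabic short-vowel, tanwin, shadda, and sukun diacritics (U+064B..U+0652).
DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}
# Alif variants (madda, hamza above, hamza below) collapsed to bare alif.
ALIF_VARIANTS = {"\u0622": "\u0627", "\u0623": "\u0627", "\u0625": "\u0627"}

def devocalize(word):
    """Remove diacritics and normalize alif, yielding the unvocalized form."""
    stripped = "".join(ch for ch in word if ch not in DIACRITICS)
    return "".join(ALIF_VARIANTS.get(ch, ch) for ch in stripped)

# Illustrative diacritized spellings of the four particles of Table 1.
particles = ["\u0625\u0650\u0646\u0651",   # inna  "Indeed, truly"
             "\u0623\u064E\u0646\u0651",   # anna  "That"
             "\u0625\u0650\u0646\u0652",   # in    "If"
             "\u0623\u064E\u0646\u0652"]   # an    "to"
surface = {devocalize(p) for p in particles}
# All four particles collapse to the single unvocalized surface form.
```

That the set `surface` contains exactly one string is the ambiguity the parser must resolve from context alone.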
Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form an (Table 1).', 'Process nominals name the action of the transitive or ditransitive verb from which they derive.', 'The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.', 'When the maSdar lacks a determiner, the constituent as a whole resembles the ubiquitous annexation construct iDafa.', 'Gabbard and Kulick (2008) show that there is significant attachment ambiguity associated with iDafa, which occurs in 84.3% of the trees in our development set.', 'Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase.', 'All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly.', 'For parsing, the most challenging form of ambiguity occurs at the discourse level.', 'A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005).', 'Instead of offsetting new topics with punctuation, writers of MSA insert connectives such as wa and fa to link new elements to both preceding clauses and the text as a whole.', 'As a result, Arabic sentences are usually long relative to English, especially after segmentation (Table 2).', 'Table 2: Frequency distribution for sentence lengths in the WSJ (sections 2–23) and the ATB (p1–3).
Length  English (WSJ)  Arabic (ATB)
≤ 20    41.9%          33.7%
≤ 40    92.4%          73.2%
≤ 63    99.7%          92.6%
≤ 70    99.9%          94.9%', 'English parsing evaluations usually report results on sentences up to length 40.', 'Arabic sentences of up to length 63 would need to be evaluated to account for the same fraction of the data.', 'We propose a limit of 70 words for Arabic parsing evaluations.', 'Table 4: Gross statistics for several different treebanks.
              ATB      CTB6     Negra    WSJ
Trees         23449    28278    20602    43948
Word Types    40972    45245    51272    46348
Tokens        738654   782541   355096   1046829
Tags          32       34       499      45
Phrasal Cats  22       26       325      27
Test OOV      16.8%    22.2%    30.5%    13.2%', 'Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 2–21 (train) and section 23 (test).', 'Table 3: Dev set frequencies for the two most significant discourse markers in Arabic; the frequencies are skewed toward analysis as a conjunction.', 'The ATB gives several different analyses to these words to indicate different types of coordination.', 'But it conflates the coordinating and discourse separator functions of wa into one analysis: conjunction (Table 3).', 'A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990).', 'We show that noun-noun vs.
discourse-level coordination ambiguity in Arabic is a significant source of parsing errors (Table 8c).', '3.1 Gross Statistics.', 'Linguistic intuitions like those in the previous section inform language-specific annotation choices.', 'The resulting structural differences between treebanks can account for relative differences in parsing performance.', 'We compared the ATB5 to treebanks for Chinese (CTB6), German (Negra), and English (WSJ) (Table 4).', 'The ATB is disadvantaged by having fewer trees with longer average yields.6', '5 LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1).', 'We map the ATB morphological analyses to the shortened "Bies" tags for all experiments.', 'But to its great advantage, it has a high ratio of non-terminals/terminals (μ Constituents / μ Length).', 'Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003).', 'Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic.', 'In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance.', '3.2 Inter-annotator Agreement.', 'Annotation consistency is important in any supervised learning task.', 'In the initial release of the ATB, inter-annotator agreement was inferior to other LDC treebanks (Maamouri et al., 2008).', 'To improve agreement during the revision process, a dual-blind evaluation was performed in which 10% of the data was annotated by independent teams.', 'Maamouri et al. (2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB.', 'But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference, or similarity, between treebanks.', 'Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB.', 'For a corpus C, let M
be the set of tuples (n, l), where n is an n-gram with bracketing label l.', '6 Generative parsing performance is known to deteriorate with sentence length.', 'As a result, Habash et al. (2006) developed a technique for splitting and chunking long sentences.', 'In application settings, this may be a profitable strategy.', 'Table 5: Evaluation of 100 randomly sampled variation nuclei types.', 'The samples from each corpus were independently evaluated.', 'The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate.', 'If any n appears in a corpus position without a bracketing label, then we also add (n, NIL) to M.', 'We call the set of unique n-grams with multiple labels in M the variation nuclei of C.', 'Bracketing variation can result from either annotation errors or linguistic ambiguity.', 'Human evaluation is one way to distinguish between the two cases.', 'Following Dickinson (2005), we randomly sampled 100 variation nuclei from each corpus and evaluated each sample for the presence of an annotation error.', 'The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ.7', 'Table 5 shows type- and token-level error rates for each corpus.', 'The 95% confidence intervals for type-level errors are (5580, 9440) for the ATB and (1400, 4610) for the WSJ.', 'The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference.', 'On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample.', 'At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts.', 'For example, one of the ATB samples was the determiner dhalik "that".', 'The sample occurred in 1507 corpus positions, and we found
that the annotations were consistent.', 'If we remove this sample from the evaluation, then the ATB type-level error rises to only 37.4% while the n-gram error rate increases to 6.24%.', 'The number of ATB n-grams also falls below the WSJ sample size as the largest WSJ sample appeared in only 162 corpus positions.', '7 Unlike Dickinson (2005), we strip traces and only consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs).', 'Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei.', 'Figure 2: An ATB sample from the human evaluation.', 'The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a).', 'But the city name Sharm Al-Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b).', 'We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003).', 'Manual annotation results in human interpretable grammars that can inform future treebank annotation decisions.', 'A simple lexicalized PCFG with second order Markovization gives relatively poor performance: 75.95% F1 on the test set.8', 'But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).', '8 We use head-finding rules specified by a native speaker of Arabic.', 'This PCFG is incorporated into the Stanford Parser, a factored model that chooses a 1-best parse from the product of constituency and dependency parses.', 'In our grammar, features are realized as annotations to basic category labels.', 'We start with noun features since written Arabic contains a very high proportion of NPs.', 'genitiveMark indicates recursive NPs with an indefinite nominal left daughter and an NP right daughter.', 'This is the form of recursive levels in iDafa constructs.', 'We also add an annotation for one-level iDafa (oneLevelIdafa) constructs since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008).', 'For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead).', 'Base NPs are the other significant category of nominal phrases.', 'markBaseNP indicates these non-recursive nominal phrases.', 'This feature includes named entities, which the ATB marks with a flat NP node dominating an arbitrary number of NNP pre-terminal daughters (Figure 2).', 'For verbs we add two features.', 'First we mark any node that dominates (at any level) a verb phrase (markContainsVerb).', 'This feature has a linguistic justification.', 'Historically, Arabic grammar has identified two sentence types: those that begin with a nominal, and those that begin with a verb.', 'But foreign learners are often surprised by the verbless predications that are frequently used in Arabic.', 'Although these are technically nominal, they have become known as "equational" sentences.', 'markContainsVerb is especially effective for distinguishing root S nodes of equational sentences.', 'We also mark all nodes that dominate an SVO configuration (containsSVO).', 'In MSA, SVO usually appears in non-matrix clauses.', 'Because conjunctions are elevated in the parse trees when they separate recursive constituents, we choose the right sister instead of the category of the next word.', 'We create equivalence classes for verb, noun, and adjective POS categories.', 'Table 6: Incremental dev set results for the manually annotated grammar (sentences of length ≤ 70).', 'Lexicalizing several POS tags improves performance.', 'splitIN captures the verb/preposition idioms that are widespread in Arabic.', 'Although this feature helps, we encounter one consequence of variable word order.', 'Unlike the WSJ corpus which has a
high frequency of rules like VP → VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects).', 'For example, we might have VP → VB NP PP, where the NP is the subject.', 'This annotation choice weakens splitIN.', 'We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers.', 'All experiments use ATB parts 1–3 divided according to the canonical split suggested by Chiang et al. (2006).', 'Preprocessing the raw trees improves parsing performance considerably.9', 'We first discard all trees dominated by X, which indicates errors and non-linguistic text.', 'At the phrasal level, we remove all function tags and traces.', 'We also collapse unary chains with identical basic categories like NP → NP.', 'The pre-terminal morphological analyses are mapped to the shortened "Bies" tags provided with the treebank.', 'Finally, we add "DT" to the tags for definite nouns and adjectives (Kulick et al., 2006).', 'The orthographic normalization strategy we use is simple.10', 'In addition to removing all diacritics, we strip instances of taTweel, collapse variants of alif to bare alif,11 and map Arabic punctuation characters to their Latin equivalents.', 'We retain segmentation markers, which are consistent only in the vocalized section of the treebank, to differentiate between, e.g.,
"they" and "their."', 'Because we use the vocalized section, we must remove null pronoun markers.', 'In Table 7 we give results for several evaluation metrics.', 'Evalb is a Java re-implementation of the standard labeled precision/recall metric.12', 'The ATB gives all punctuation a single tag.', 'For parsing, this is a mistake, especially in the case of interrogatives.', 'splitPUNC restores the convention of the WSJ.', 'We also mark all tags that dominate a word with the feminine ending taa marbuuTa (markFeminine).', 'To differentiate between the coordinating and discourse separator functions of conjunctions (Table 3), we mark each CC with the label of its right sister (splitCC).', 'The intuition here is that the role of a discourse marker can usually be determined by the category of the word that follows it.', '9 Both the corpus split and pre-processing code are available at http://nlp.stanford.edu/projects/arabic.shtml.', '10 Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation.', '11 taTweel (-) is an elongation character used in Arabic script to justify text.', 'It has no syntactic function.', 'Variants of alif are inconsistently used in Arabic texts.', 'For alif with hamza, normalization can be seen as another level of devocalization.', '12 For English, our Evalb implementation is identical to the most recent reference (EVALB20080701).', 'For Arabic we add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB.', 'Tokens tagged as PUNC are not discarded unless they consist entirely of punctuation.', 'Table 7: Test set results. [Evalb (LP/LR/F1), Leaf Ancestor (corpus, sentence, exact), and tagging scores for the Stanford (v1.6.3), Bikel (v1.2), and Berkeley (Sep. 09) parsers, each under Baseline and Gold POS configurations for sentence lengths ≤ 70 and all lengths; the individual figures are not recoverable from this extraction.]', 'Maamouri et al. (2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length ≤ 40.', 'The Bikel GoldPOS configuration only supplies the gold POS tags; it does not force the parser to use them.', 'We are unaware of prior results for the Stanford parser.', 'Figure 3: Dev set learning curves for sentence lengths ≤ 70. [F1 against number of training trees for the Berkeley, Stanford, and Bikel parsers.]', 'All three curves remain steep at the maximum training set size of 18818 trees.', 'The Leaf Ancestor metric measures the cost of transforming guess trees to the reference (Sampson and Babarczy, 2003).', 'It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB.', 'For each terminal, the Leaf Ancestor metric extracts the shortest path to the root.', 'It then computes a normalized Levenshtein edit distance between the extracted chain and the reference.', 'The range of the score is between 0 and 1 (higher is better).', 'We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores along with the number of exactly matching guess trees.', '5.1 Parsing Models.', 'The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features:
1. Presence of the determiner Al.
2. Contains digits.
3. Ends with the feminine affix taa marbuuTa.
4. Various verbal and adjectival suffixes.', 'Other notable parameters are second order vertical Markovization and marking of unary rules.', 'Modifying the Berkeley parser for Arabic is straightforward.', 'After adding a ROOT node to all trees, we train a grammar using six split-and-merge cycles and no Markovization.', 'We use the default inference parameters.', 'Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings.', 'However, when we pre-tag the input, as is recommended for English, we notice a 0.57% F1 improvement.', 'We use the log-linear tagger of Toutanova et al. (2003), which gives 96.8% accuracy on the test set.', '5.2 Discussion.', 'The Berkeley parser gives state-of-the-art performance for all metrics.', 'Our baseline for all sentence lengths is 5.23% F1 higher than the best previous result.', 'The difference is due to more careful pre-processing.', 'Figure 4: The constituent Restoring of its constructive and effective role parsed by the three different models (gold segmentation). [(a) Reference, (b) Stanford, (c) Berkeley, (d) Bikel; the tree internals are not recoverable from this extraction.]', 'The ATB annotation distinguishes between verbal and nominal readings of maSdar
process nominals.', 'Like verbs, maSdar takes arguments and assigns case to its objects, whereas it also demonstrates nominal characteristics by, e.g., taking determiners and heading iDafa (Fassi Fehri, 1993).', 'In the ATB, :: b astaâ\x80\x99adah is tagged 48 times as a noun and 9 times as verbal noun.', 'Consequently, all three parsers prefer the nominal reading.', 'Table 8b shows that verbal nouns are the hardest pre-terminal categories to identify.', 'None of the models attach the attributive adjectives correctly.', 'pre-processing.', 'However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009).', 'Moreover, the Stanford parser achieves the most exact Leaf Ancestor matches and tagging accuracy that is only 0.1% below the Bikel model, which uses pre-tagged input.', 'In Figure 4 we show an example of variation between the parsing models.', 'We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8.', 'The errors shown are from the Berkeley parser output, but they are representative of the other two parsing models.', '6 Joint Segmentation and Parsing.', 'Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words.', 'Since these are distinct syntactic units, they are typically segmented.', 'The ATB segmentation scheme is one of many alternatives.', 'Until now, all evaluations of Arabic parsingâ\x80\x94including the experiments in the previous sectionâ\x80\x94have assumed gold segmentation.', 'But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline.', 'Segmentation errors cascade into the parsing phase, placing an artificial limit on parsing performance.', 'Lattice parsing (Chappelier et al., 1999) 
is an alternative to a pipeline that prevents cascading errors by placing all segmentation options into the parse chart.', 'Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic.', 'We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton.', 'To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a).', 'Formally, for a lexicon L and segments I ∈ L, O ∉ L, each word automaton accepts the language I*(O + I)I*.', 'Aside from adding a simple rule to correct alif deletion caused by the preposition J, no other language-specific processing is performed.', 'Our evaluation includes both weighted and unweighted lattices.', 'We weight edges using a unigram language model estimated with Good-Turing smoothing.', 'Despite their simplicity, unigram weights have been shown to be an effective feature in segmentation models (Dyer, 2009).', 'The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005).', 'MADA uses an ensemble of SVMs to first re-rank the output of a deterministic morphological analyzer.', 'For each input token, the segmentation is then performed deterministically given the 1-best analysis.', 'Of course, this weighting makes the PCFG an improper distribution.', 'However, in practice, unknown word models also make the distribution improper.', 'Table 8: Per-category performance of the Berkeley parser on sentence lengths ≤ 70 (dev set, gold segmentation): (a) major phrasal categories, (b) major POS categories, (c) the ten lowest scoring (Collins, 2003)-style dependencies occurring more than 700 times.', '(a) Of the high frequency phrasal categories, ADJP and SBAR are the hardest to parse.', 'We showed in §2 that lexical ambiguity explains the underperformance of these categories.', '(b) POS tagging accuracy is lowest for maSdar verbal nouns (VBG,VN) and adjectives (e.g., JJ).', 'Richer tag sets have been suggested for modeling morphologically complex distinctions (Diab, 2007), but we find that linguistically rich tag sets do not help parsing.', '(c) Coordination ambiguity is shown in dependency scores by e.g., (S S S R) and (NP NP NP R).', '(NP NP PP R) and (NP NP ADJP R) are both iDafa attachment.', 'Since guess and gold trees may now have different yields, the question of evaluation is complex.', 'Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric.', 'But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal.', 'Therefore, we only score guess/gold pairs with identical character yields, a condition that allows us to measure parsing, tagging, and segmentation accuracy by ignoring whitespace.', 'Table 9 shows that MADA produces a high quality segmentation, and that the effect of cascading segmentation errors on parsing is only 1.92% F1.', 'However, MADA is language-specific and relies on manually constructed dictionaries.', 'Conversely, the lattice parser requires no linguistic resources and produces segmentations of comparable quality.', 'Nonetheless, parse quality
is much lower in the joint model because a lattice is effectively a long sentence.', 'A cell in the bottom row of the parse chart is required for each potential whitespace boundary.', 'As we have said, parse quality decreases with sentence length.', 'Finally, we note that simple weighting gives nearly a 2% F1 improvement, whereas Goldberg and Tsarfaty (2008) found that unweighted lattices were more effective for Hebrew.', 'Table 9: Dev set results for sentences of length ≤ 70.', 'Coverage indicates the fraction of hypotheses in which the character yield exactly matched the reference.', 'Each model was able to produce hypotheses for all input sentences.', 'In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.', 'By establishing significantly higher parsing baselines, we have shown that Arabic parsing performance is not as poor as previously thought, but remains much lower than English.', 'We have described grammar state splits that significantly improve parsing performance, catalogued parsing errors, and quantified the effect of segmentation errors.', 'With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus.', 'Our results suggest that current parsing models would benefit from better annotation consistency and enriched annotation in certain syntactic configurations.', 'Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work.', 'We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions.', 'The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship.', 'This paper is based on work supported in part by DARPA through IBM.', 'The content does not necessarily reflect the views of the U.S.
Government, and no official endorsement should be inferred.']",abstractive C02-1025,C02-1025,3,204,"They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T | S), where S is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S.
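The decoding just described, choosing the tag sequence that maximizes the product of per-word probabilities subject to admissible class transitions, can be sketched as a small dynamic program. The class names, toy probabilities, and admissibility test below are illustrative stand-ins, not the paper's trained classifier.

```python
def viterbi(words, classes, prob, admissible):
    """Pick the class sequence maximizing the product of per-word
    probabilities; inadmissible class transitions score zero."""
    best = {c: (prob(0, c), [c]) for c in classes}
    for i in range(1, len(words)):
        step = {}
        for c in classes:
            score, path = max(
                ((s * prob(i, c), p) for prev, (s, p) in best.items()
                 if admissible(prev, c)),
                key=lambda t: t[0], default=(0.0, []))
            step[c] = (score, path + [c])
        best = step
    return max(best.values(), key=lambda t: t[0])[1]

# Toy example: two words, three classes; probabilities are made up.
CLASSES = ["per_begin", "per_end", "not"]
P = {(0, "per_begin"): 0.6, (0, "per_end"): 0.1, (0, "not"): 0.3,
     (1, "per_begin"): 0.1, (1, "per_end"): 0.7, (1, "not"): 0.2}

def ok(prev, cur):
    # per_begin must not be followed by not-a-name,
    # and per_end must not follow not-a-name.
    return not (prev == "per_begin" and cur == "not") and \
           not (prev == "not" and cur == "per_end")

tags = viterbi(["John", "Smith"], CLASSES, lambda i, c: P[(i, c)], ok)
```

A secondary classifier is unnecessary under this formulation: document-level information enters through the per-word probabilities themselves.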
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is “the”, and 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', '1 http://maxent.sourceforge.net', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c_1, …, c_n | s, D) = ∏_i P(c_i | s, D) × P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a corresponding feature (initCaps, zone) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}.
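A few of the local feature groups above (case and zone, token information, first word, lexicon) can be sketched as string-valued binary indicators. The feature names and grouping below are illustrative, not the authors' implementation.

```python
def local_features(tokens, i, zone):
    """Sketch of some local feature groups for token i in a zone.
    Returning a feature name is equivalent to setting that binary
    feature to 1; names here are hypothetical."""
    w = tokens[i]
    feats = ["nonContextual", f"zone-{zone}"]   # fire for every token
    if w[0].isupper():
        feats.append(f"initCaps-{zone}")
    if w.isupper():
        feats.append(f"allCaps-{zone}")          # allCaps implies initCaps
    if w[0].islower() and any(c.isupper() for c in w):
        feats.append(f"mixedCaps-{zone}")
    if w[0].isupper() and w.endswith("."):
        feats.append("initCapPeriod")            # e.g. "Mr."
    if i == 0:
        feats.append("firstword")
    feats.append(f"lexicon-{w.lower()}")         # token-string feature
    return feats
```

In the maximum entropy framework all of these can fire simultaneously for one token, unlike IdentiFinder's priority scheme.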
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, …, December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, …
, Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
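The suffix-frequency computation above (counting distinct preceding tokens, so that Corp. seen only after Electric and Manufacturing has frequency 2) can be sketched as follows; the threshold is an assumption for illustration.

```python
from collections import defaultdict

def corporate_suffix_list(org_names, min_distinct=2):
    """Sketch of building Corporate-Suffix-List: for each token that
    ends an organization name, count distinct preceding tokens and
    keep those above a (hypothetical) threshold."""
    preceders = defaultdict(set)
    for name in org_names:
        parts = name.split()
        if len(parts) >= 2:
            preceders[parts[-1]].add(parts[-2])
    return {suffix for suffix, prev in preceders.items()
            if len(prev) >= min_distinct}
```

With the paper's example, Corp. qualifies (two distinct preceders) while a last word seen after only one preceder does not.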
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence after w_i is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w_{i-1}, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on …”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on …”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (articles / tokens): MENERGI 318 / 160,000 (MUC6) and 200 / 180,000 (MUC7); IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's.
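The acronym-matching (ACRO) group described earlier, which aligns sequences of initial-capitalized words with acronyms found in the same document, can be sketched as follows. The feature names and helper below are illustrative, not the authors' code.

```python
def acronym_features(tokens, acronyms):
    """Sketch of the ACRO group: an all-caps token matching a document
    acronym gets A_unique; an initial-caps sequence whose initials
    spell that acronym gets A_begin / A_continue / A_end."""
    feats = {i: [] for i in range(len(tokens))}
    for i, t in enumerate(tokens):
        if t.isupper() and t in acronyms:
            feats[i].append("A_unique")
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            if all(w[0].isupper() for w in window) and \
               "".join(w[0] for w in window) == acro:
                feats[i].append("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].append("A_continue")
                feats[i + n - 1].append("A_end")
    return feats
```

This reproduces the paper's FCC example: Federal gets A begin, Communications A continue, Commission A end, and FCC itself A unique.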
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W04-0213,W04-0213,2,3,The corpus was annoted with different linguitic information.,"We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags T = (t1, ..., tn) that maximizes the probability P(T | S), where S = (w1, ..., wn) is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in S. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | S, D), where T is the sequence of named-entity tags assigned to the words in the sentence S, and D is the information that can be extracted from the whole document containing S. 
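The paper's testing procedure (described in Section 3.2) multiplies per-word class probabilities by 0/1 transition probabilities and uses dynamic programming to pick the most probable admissible class sequence. A minimal Viterbi-style sketch of that search; the function names, class names, and toy probabilities below are our own illustration, not the paper's code:

```python
import math

# Pick the most probable class sequence given per-word probabilities
# P(c_i | s, D) from a classifier. Transitions are 1 if admissible and
# 0 otherwise, so inadmissible sequences (e.g. a "begin" class followed
# by an unrelated "unique" class) are pruned away entirely.

def best_sequence(word_probs, admissible):
    """word_probs: list of {class: prob}; admissible: set of (prev, cur) pairs."""
    # best[c] = (log-prob of best admissible path ending in c, that path)
    best = {c: (math.log(p), [c]) for c, p in word_probs[0].items() if p > 0}
    for probs in word_probs[1:]:
        nxt = {}
        for c, p in probs.items():
            if p <= 0:
                continue
            cands = [(lp + math.log(p), path + [c])
                     for prev, (lp, path) in best.items()
                     if (prev, c) in admissible]
            if cands:
                nxt[c] = max(cands)
        best = nxt
    return max(best.values())[1]

probs = [{"person_begin": 0.7, "not_a_name": 0.3},
         {"person_end": 0.4, "location_unique": 0.6}]
admissible = {("person_begin", "person_end"), ("not_a_name", "location_unique")}
print(best_sequence(probs, admissible))  # ['person_begin', 'person_end']
```

Note that although location_unique has the higher local probability at the second word, the admissibility constraint forces the globally best path through person_begin, person_end.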
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', "(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.", 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) Π_j α_j^f_j(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = "the", and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '(Footnote 1: http://maxent.sourceforge.net.) 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c1, ..., cn | s, D) = Π_i P(ci | s, D) × P(ci | ci-1), where P(ci | s, D) is determined by the maximum entropy classifier and P(ci | ci-1) is the 0-or-1 transition probability just defined.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w+1 and w-1: Similarly, if the next token w+1 (or the previous token w-1) is initCaps, a corresponding feature (initCaps, zone) for that token is set to 1, etc. Token Information: This group consists of 10 features based on the token string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w-1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w-1) is set to 1.', 'Same for w+1. 
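The case-and-zone feature groups just described can be sketched as a small binary feature extractor. This is our own illustration under the paper's stated rules (an allCaps token is also initCaps; firstword fires only on the first word of a sentence; InitCapPeriod fires on tokens such as "Mr."); the function names and string encoding of features are assumptions, not the paper's implementation:

```python
# Binary local features for one token, in the spirit of the Case-and-Zone,
# First-Word, and Token-Information feature groups described above.

def case_of(token):
    """Return the paper's case category for a token, or None."""
    if not token:
        return None
    if token.isupper():
        return "allCaps"
    if token[0].isupper():
        return "initCaps"
    if token[0].islower() and any(ch.isupper() for ch in token):
        return "mixedCaps"
    return None

def local_features(token, zone, is_first_word):
    feats = {f"zone-{zone}"}                 # exactly one zone feature fires
    case = case_of(token)
    if case == "allCaps":
        feats.add(f"(allCaps, {zone})")
    if case in ("allCaps", "initCaps"):      # allCaps implies initCaps
        feats.add(f"(initCaps, {zone})")
    if case == "mixedCaps":
        feats.add(f"(mixedCaps, {zone})")
    if is_first_word:
        feats.add("firstword")
    if token and token[0].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")           # e.g. "Mr."
    return feats

print(sorted(local_features("Mr.", "TXT", True)))
```

For "Mr." as the first word of a TXT-zone sentence, this fires zone-TXT, (initCaps, TXT), firstword, and InitCapPeriod, matching the behavior described above.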
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the previous or next token is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
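The "frequency" used to rank candidate corporate suffixes counts distinct previous tokens rather than raw occurrences, as in the Electric Corp. / Manufacturing Corp. example above. A minimal sketch of that computation; the helper names are ours:

```python
from collections import defaultdict

# "Frequency" of a candidate suffix = number of *distinct* previous tokens
# it occurs with in organization names, not its raw count.

def suffix_frequencies(org_names):
    """org_names: list of tokenized organization names from training data."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        if len(name) >= 2:
            prev_tokens[name[-1]].add(name[-2])  # last token <- distinct predecessors
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

# Electric Corp. seen 3 times, Manufacturing Corp. seen 5 times:
names = [["Electric", "Corp."]] * 3 + [["Manufacturing", "Corp."]] * 5
print(suffix_frequencies(names))  # {'Corp.': 2}
```

Even though Corp. appears 8 times in total, its "frequency" is 2, reproducing the worked example in the text.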
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens from that token to the end of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2, Sources of Dictionaries: Location Names from http://www.timeanddate.com, http://www.cityguide.travel-guides.com, and http://www.worldtravelguide.net; Corporate Names from http://www.fmlx.com; Person First Names from http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder â\x80\x93 650,000 â\x80\x93 790,000 MENE â\x80\x93 â\x80\x93 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7,14%.', 'ICOC and CSPP contributed the greatest im provements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive C10-1045,C10-1045,2,26,It is probably the first analysis of Arabic parsing of this kind.,"A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes P(c_i | c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as
follows: P(c_1, ..., c_n | s, D) = ∏_i P(c_i | s, D) × P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for w-1 (or w+1) is set to 1, etc. Token Information: This group consists of 10 features based on the token string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature, firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w-1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w-1) is set to 1.', 'Similarly for w+1.
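The interaction between binary feature groups like these and the exponential model can be sketched in a few lines. This is a rough illustration under assumed names (the feature labels and the weights `alphas` are invented for the example), not the paper's feature set or trained model.

```python
import math

# Sketch: a few binary local features for a token, in the spirit of the
# case/zone, token-information, and first-word groups described above.
def local_features(token, zone, is_first_word):
    feats = set()
    feats.add("non-contextual")          # set to 1 for all tokens
    feats.add("zone-" + zone)            # zone feature
    if token[:1].isupper():
        feats.add(("initCaps", zone))    # case-and-zone feature
    if token.isupper():
        feats.add(("allCaps", zone))
    if token[:1].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")       # token-information feature
    if is_first_word:
        feats.add("firstword")
    return feats

# Exponential form p(o | h) = (1/Z(h)) * prod_j alpha_j^{f_j(h, o)},
# with made-up weights alpha_j for the outcomes that fire them.
def maxent_prob(feats, outcome, alphas, outcomes):
    def unnorm(o):
        return math.prod(alphas.get((f, o), 1.0) for f in feats)
    return unnorm(outcome) / sum(unnorm(o) for o in outcomes)

feats = local_features("Mr.", "TXT", is_first_word=False)
alphas = {(("initCaps", "TXT"), "name"): 3.0, ("InitCapPeriod", "name"): 2.0}
p = maxent_prob(feats, "name", alphas, ["name", "not-a-name"])
```

Here "Mr." fires the (initCaps, TXT) and InitCapPeriod features, so the unnormalized score for the "name" outcome is 3.0 × 2.0 = 6 against 1 for "not-a-name", giving p = 6/7.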
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, a corresponding PersonFirstName feature for w+1 is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday,
..., Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
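The "frequency" used to build Corporate-Suffix-List (the number of distinct previous tokens a candidate suffix has) can be sketched directly from the Corp. example above. This is an illustrative sketch with an assumed helper name, not the authors' code.

```python
from collections import defaultdict

# Sketch of the "frequency" used to compile Corporate-Suffix-List: the
# frequency of a candidate suffix is the number of DISTINCT tokens that
# precede it as the last word of an organization name.
def suffix_frequencies(org_names):
    prev_tokens = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            prev_tokens[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

# The example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times -> "frequency" of Corp. is 2.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freqs = suffix_frequencies(orgs)
print(freqs["Corp."])  # 2
```

Counting distinct predecessors rather than raw occurrences keeps a suffix that appears many times with one fixed company name from being mistaken for a productive corporate suffix.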
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens just after the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens just before the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 (Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.) (Table 4: Training data. MENERGI: MUC6 318 articles, 160,000 tokens; MUC7 200 articles, 180,000 tokens. IdentiFinder: MUC6 650,000 tokens; MUC7 790,000 tokens. MENE: MUC7 350 articles, 321,000 tokens.) (Table 5: Comparison of results for MUC6.) For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's.
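The ACRO matching described in Section 4.2 (all-caps acronyms matched against the initials of initial-capitalized word sequences, as in the FCC example) might be sketched roughly as follows; the function name and matching details are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the ACRO feature group: an all-caps token (e.g., FCC) is
# treated as an acronym, and sequences of initial-capitalized words whose
# initials spell it out would receive A_begin / A_continue / A_end features.
def acronym_matches(acronym, tokens):
    """Return (start, end) spans of token sequences matching the acronym."""
    n = len(acronym)
    spans = []
    for i in range(len(tokens) - n + 1):
        window = tokens[i:i + n]
        if all(t[:1].isupper() for t in window) and \
           "".join(t[0] for t in window).upper() == acronym:
            spans.append((i, i + n))
    return spans

tokens = "the Federal Communications Commission said FCC rules".split()
spans = acronym_matches("FCC", tokens)
print(spans)  # [(1, 4)]
```

In the matched span, the first word would get A begin, the middle words A continue, and the last word A end, while the acronym token itself gets A unique.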
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For a fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides the size of the training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K and 790K words), rather than tokens.) (Table 6: Comparison of results for MUC7.) Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or a complex formulation of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W04-0213,W04-0213,10,45,All the texts were annotated by two people.,"It is well-known that constituency parsing models designed for English often do not generalize easily to other languages and treebanks.1 Explanations for this phenomenon have included the relative informativeness of lexicalization (Dubey and Keller, 2003; Arun and Keller, 2005), insensitivity to morphology (Cowan and Collins, 2005; Tsarfaty and Sima’an, 2008), and the effect of variable word order (Collins et al., 1999).","['Better Arabic Parsing: Baselines, Evaluations, and Analysis', 'In this paper, we offer broad insight into the underperformance of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.', 'First, we identify sources of syntactic ambiguity understudied in the existing parsing literature.', 'Second, we show that although the Penn Arabic Treebank is similar to other tree- banks in gross statistical terms, annotation consistency remains problematic.', 'Third, we develop a human interpretable grammar that is competitive with a latent variable PCFG.', 'Fourth, we show how to build better models for three different parsers.', 'Finally, we show that in application settings, the absence of gold segmentation lowers parsing performance by 2â\x80\x935% F1.', 'It is well-known that constituency parsing models designed for English often do not generalize easily to other languages and treebanks.1 Explanations for this phenomenon have included the relative informativeness of lexicalization (Dubey and Keller, 2003; Arun and Keller, 2005), insensitivity to morphology (Cowan and Collins, 2005; Tsarfaty and 
Sima'an, 2008), and the effect of variable word order (Collins et al., 1999).', 'Certainly these linguistic factors increase the difficulty of syntactic disambiguation.', 'Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Kübler, 2005).', '1 The apparent difficulty of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajič and Zemánek, 2004; Habash and Roth, 2009).', 'To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply "Arabic") because of the unusual opportunity it presents for comparison to English parsing results.', 'The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993).', 'Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages.', 'But Arabic contains a variety of linguistic phenomena unseen in English.', 'Crucially, the conventional orthographic form of MSA text is unvocalized, a property that results in a deficient graphical representation.', 'For humans, this characteristic can impede the acquisition of literacy.', 'How do additional ambiguities caused by devocalization affect statistical learning?', 'How should the absence of vowels and syntactic markers influence annotation choices and grammar development?', 'Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering.', 'Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).', 'Next we show that the ATB is similar to other treebanks in gross statistical terms, but that annotation consistency remains low relative to English (§3).', 'We then use linguistic and annotation insights to develop a manually annotated
grammar for Arabic (§4).', 'To facilitate comparison with previous work, we exhaustively evaluate this grammar and two other parsing models when gold segmentation is assumed (§5).', 'Finally, we provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing (§6).', 'We quantify error categories in both evaluation settings.', 'To our knowledge, ours is the first analysis of this kind for Arabic parsing.', 'Arabic is a morphologically rich language with a root-and-pattern system similar to other Semitic languages.', 'The basic word order is VSO, but SVO, VOS, and VO configurations are also possible.2 Nouns and verbs are created by selecting a consonantal root (usually triliteral or quadriliteral), which bears the semantic core, and adding affixes and diacritics.', 'Particles are uninflected.', '(Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form an. Word / Head Of / Complement / POS: 1. inna (Indeed, truly) / VP / Noun / VBP; 2. anna (That) / SBAR / Noun / IN; 3. in (If) / SBAR / Verb / IN; 4. an (to) / SBAR / Verb / IN.)', 'The distinctions in the ATB are linguistically justified, but complicate parsing.', 'Table 8a shows that the best model recovers SBAR at only 71.0% F1.', 'Diacritics can also be used to specify grammatical relations such as case and gender.', 'But diacritics are not present in unvocalized text, which is the standard form of, e.g., news media documents.3', '(Figure 1: reference parse (a) and Stanford parse (b) of a sentence in which she added introduces a quotation.)', 'Let us consider an example of ambiguity caused by devocalization.', 'Table 1 shows four words whose unvocalized surface forms an are indistinguishable.', 'Whereas Arabic linguistic theory assigns (1) and (2) to the class of pseudo verbs inna and her sisters since they
can be inflected, the ATB conventions treat (2) as a complementizer, which means that it must be the head of SBAR.', 'Because these two words have identical complements, syntax rules are typically unhelpful for distinguishing between them.', 'This is especially true in the case of quotations, which are common in the ATB, where (1) will follow a verb like (2) (Figure 1).', 'Even with vocalization, there are linguistic categories that are difficult to identify without semantic clues.', 'Two common cases are the attributive adjective and the process nominal maSdar, which can have a verbal reading.4 Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness.', 'Moreover, they are used as substantives much', '2 Unlike machine translation, constituency parsing is not significantly affected by variable word order.', 'However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009).', 'In particular, the decision to represent arguments in verb-initial clauses as VP internal makes VSO and VOS configurations difficult to distinguish.', 'Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).', '3 Techniques for automatic vocalization have been studied (Zitouni et al., 2006; Habash and Rambow, 2007).', 'However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.', '4 Traditional Arabic linguistic theory treats both of these types as subcategories of noun.
Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form an (Table 1).', 'more frequently than is done in English.', 'Process nominals name the action of the transitive or ditransitive verb from which they derive.', 'The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.', 'When the maSdar lacks a determiner, the constituent as a whole resembles the ubiquitous annexation construct iDafa.', 'Gabbard and Kulick (2008) show that there is significant attachment ambiguity associated with iDafa, which occurs in 84.3% of the trees in our development set.', 'Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase.', 'All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly.', 'For parsing, the most challenging form of ambiguity occurs at the discourse level.', 'A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005).', 'Instead of offsetting new topics with punctuation, writers of MSA insert connectives such as wa and fa to link new elements to both preceding clauses and the text as a whole.', 'As a result, Arabic sentences are usually long relative to English, especially after segmentation. (Table 2: Frequency distribution for sentence lengths in the WSJ (sections 2-23) and the ATB (p1-3). Length / English (WSJ) / Arabic (ATB): <= 20: 41.9% / 33.7%; <= 40: 92.4% / 73.2%; <= 63: 99.7% / 92.6%; <= 70: 99.9% / 94.9%.)', 'English parsing evaluations usually report results on sentences up to length 40.', 'Arabic sentences of up to length 63 would need to be evaluated to account for the same fraction of the data.', 'We propose a limit of 70 words for Arabic parsing evaluations.', 'ATB / CTB6 / Negra / WSJ: Trees 23449 / 28278 / 20602 / 43948;
Word Types 40972 / 45245 / 51272 / 46348; Tokens 738654 / 782541 / 355096 / 1046829; Tags 32 / 34 / 499 / 45; Phrasal Cats 22 / 26 / 325 / 27; Test OOV 16.8% / 22.2% / 30.5% / 13.2%; Per Sentence. (Table 4: Gross statistics for several different treebanks.)', 'Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 2-21 (train) and section 23 (test).', 'Table 3: Dev set frequencies for the two most significant discourse markers in Arabic; they are skewed toward analysis as a conjunction.', 'The ATB gives several different analyses to these words to indicate different types of coordination.', 'But it conflates the coordinating and discourse separator functions of wa into one analysis: conjunction (Table 3).', 'A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990).', 'We show that noun-noun vs.
discourse-level coordination ambiguity in Arabic is a significant source of parsing errors (Table 8c).

3.1 Gross Statistics.

Linguistic intuitions like those in the previous section inform language-specific annotation choices. The resulting structural differences between treebanks can account for relative differences in parsing performance. We compared the ATB5 to treebanks for Chinese (CTB6), German (Negra), and English (WSJ) (Table 4). The ATB is disadvantaged by having fewer trees with longer average yields.6 But to its great advantage, it has a high ratio of non-terminals to terminals (mean Constituents / mean Length). Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003). Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic. In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance.

5 LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1). We map the ATB morphological analyses to the shortened "Bies" tags for all experiments.

3.2 Inter-annotator Agreement.

Annotation consistency is important in any supervised learning task. In the initial release of the ATB, inter-annotator agreement was inferior to that of other LDC treebanks (Maamouri et al., 2008). To improve agreement during the revision process, a dual-blind evaluation was performed in which 10% of the data was annotated by independent teams. Maamouri et al. (2008) reported agreement between the teams (measured with Evalb) at 93.8% F1, the level of the CTB. But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference (or similarity) between treebanks. Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB. For a corpus C, let M
be the set of tuples (n, l), where n is an n-gram with bracketing label l. If any n appears in a corpus position without a bracketing label, then we also add (n, NIL) to M. We call the set of unique n-grams with multiple labels in M the variation nuclei of C.

6 Generative parsing performance is known to deteriorate with sentence length. As a result, Habash et al. (2006) developed a technique for splitting and chunking long sentences. In application settings, this may be a profitable strategy.

Table 5: Evaluation of 100 randomly sampled variation nuclei types. The samples from each corpus were independently evaluated. The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate.

Bracketing variation can result from either annotation errors or linguistic ambiguity. Human evaluation is one way to distinguish between the two cases. Following Dickinson (2005), we randomly sampled 100 variation nuclei from each corpus and evaluated each sample for the presence of an annotation error. The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ.7 Table 5 shows type- and token-level error rates for each corpus. The 95% confidence intervals for type-level errors are (55.80%, 94.40%) for the ATB and (14.00%, 46.10%) for the WSJ. The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference. On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample. At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts. For example, one of the ATB samples was the determiner dhalik ("that"). The sample occurred in 1507 corpus positions, and we found
that the annotations were consistent. If we remove this sample from the evaluation, then the ATB type-level error rises to only 37.4% while the n-gram error rate increases to 6.24%. The number of ATB n-grams also falls below the WSJ sample size, as the largest WSJ sample appeared in only 162 corpus positions.

7 Unlike Dickinson (2005), we strip traces and only consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs). Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei.

Figure 2: An ATB sample from the human evaluation, the phrase summit Sharm Al-Sheikh. The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a). But the city name Sharm Al-Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b).

We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003). Manual annotation results in human-interpretable grammars that can inform future treebank annotation decisions. A simple lexicalized PCFG with second-order Markovization gives relatively poor performance: 75.95% F1 on the test set.8 But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).

8 We use head-finding rules specified by a native speaker of Arabic. This PCFG is incorporated into the Stanford Parser, a factored model that chooses a 1-best parse from the product of constituency and dependency parses.

In our grammar, features are realized as annotations to basic category labels. We start with noun features since written Arabic contains a very high proportion of NPs. genitiveMark indicates recursive NPs with an indefinite nominal left daughter and an NP right daughter. This is the form of recursive levels in iDafa constructs. We also add an annotation for one-level iDafa constructs (oneLevelIdafa) since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008). For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead). Base NPs are the other significant category of nominal phrases. markBaseNP indicates these non-recursive nominal phrases. This feature includes named entities, which the ATB marks with a flat NP node dominating an arbitrary number of NNP pre-terminal daughters (Figure 2).

For verbs we add two features. First we mark any node that dominates (at any level) a verb phrase (markContainsVerb). This feature has a linguistic justification. Historically, Arabic grammar has identified two sentence types: those that begin with a nominal, and those that begin with a verb. But foreign learners are often surprised by the verbless predications that are frequently used in Arabic. Although these are technically nominal, they have become known as "equational" sentences. markContainsVerb is especially effective for distinguishing root S nodes of equational sentences. We also mark all nodes that dominate an SVO configuration (containsSVO). In MSA, SVO usually appears in non-matrix clauses.

Lexicalizing several POS tags improves performance. splitIN captures the verb/preposition idioms that are widespread in Arabic. Although this feature helps, we encounter one consequence of variable word order. Unlike the WSJ corpus, which has a high frequency of rules like VP -> VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects). For example, we might have VP -> VB NP PP, where the NP is the subject. This annotation choice weakens splitIN.

The ATB gives all punctuation a single tag. For parsing, this is a mistake, especially in the case of interrogatives. splitPUNC restores the convention of the WSJ. We also mark all tags that dominate a word with the feminine ending taa marbuuTa (markFeminine). To differentiate between the coordinating and discourse separator functions of conjunctions (Table 3), we mark each CC with the label of its right sister (splitCC). The intuition here is that the role of a discourse marker can usually be determined by the category of the word that follows it. Because conjunctions are elevated in the parse trees when they separate recursive constituents, we choose the right sister instead of the category of the next word. We create equivalence classes for verb, noun, and adjective POS categories.

Table 6: Incremental dev set results for the manually annotated grammar (sentences of length <= 70).

We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers. All experiments use ATB parts 1-3 divided according to the canonical split suggested by Chiang et al. (2006). Preprocessing the raw trees improves parsing performance considerably.9 We first discard all trees dominated by X, which indicates errors and non-linguistic text. At the phrasal level, we remove all function tags and traces. We also collapse unary chains with identical basic categories like NP -> NP. The pre-terminal morphological analyses are mapped to the shortened "Bies" tags provided with the treebank. Finally, we add "DT" to the tags for definite nouns and adjectives (Kulick et al., 2006).

9 Both the corpus split and pre-processing code are available at http://nlp.stanford.edu/projects/arabic.shtml.

The orthographic normalization strategy we use is simple.10 In addition to removing all diacritics, we strip instances of taTweel, collapse variants of alif to bare alif,11 and map Arabic punctuation characters to their Latin equivalents. We retain segmentation markers, which are consistent only in the vocalized section of the treebank, to differentiate between, e.g., "they" and "their." Because we use the vocalized section, we must remove null pronoun markers.

10 Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation.

11 taTweel is an elongation character used in Arabic script to justify text. It has no syntactic function. Variants of alif are inconsistently used in Arabic texts. For alif with hamza, normalization can be seen as another level of devocalization.

In Table 7 we give results for several evaluation metrics. Evalb is a Java re-implementation of the standard labeled precision/recall metric.12

Table 7: Test set results. Maamouri et al. (2009b) evaluated the Bikel parser using the same ATB split, but only reported dev set results with gold POS tags for sentences of length <= 40. The Bikel GoldPOS configuration only supplies the gold POS tags; it does not force the parser to use them. We are unaware of prior results for the Stanford parser.

  Model               System               Length   Leaf Ancestor (Corpus / Sent / Exact)   LP      LR      F1      Tag %
  Stanford (v1.6.3)   Baseline             70       0.791 / 0.825 / 358                     80.37   79.36   79.86   95.58
                      Baseline             all      0.773 / 0.818 / 358                     78.92   77.72   78.32   95.49
                      GoldPOS              70       0.802 / 0.836 / 452                     81.07   80.27   80.67   99.95
  Bikel (v1.2)        Baseline (Self tag)  70       0.770 / 0.801 / 278                     77.92   76.00   76.95   94.64
                      Baseline (Self tag)  all      0.752 / 0.794 / 278                     76.96   75.01   75.97   94.63
                      Baseline (Pre tag)   70       0.771 / 0.804 / 295                     78.35   76.72   77.52   95.68
                      Baseline (Pre tag)   all      0.752 / 0.796 / 295                     77.31   75.64   76.47   95.68
                      GoldPOS              70       0.775 / 0.808 / 309                     78.83   77.18   77.99   96.60
  Berkeley (Sep. 09)  (Petrov, 2009)       all      - / - / -                               76.40   75.30   75.85   -
                      Baseline             70       0.809 / 0.839 / 335                     82.32   81.63   81.97   95.07
                      Baseline             all      0.79 / - / -                            81.43   80.73   81.08   95.02
                      GoldPOS              70       0.831 / 0.859 / 496                     84.37   84.21   84.29   99.87

Figure 3: Dev set learning curves for sentence lengths <= 70 (F1 against the number of training trees for the Berkeley, Stanford, and Bikel parsers). All three curves remain steep at the maximum training set size of 18818 trees.

The Leaf Ancestor metric measures the cost of transforming guess trees to the reference (Sampson and Babarczy, 2003). It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB. For each terminal, the Leaf Ancestor metric extracts the shortest path to the root. It then computes a normalized Levenshtein edit distance between the extracted chain and the reference. The range of the score is between 0 and 1 (higher is better). We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores along with the number of exactly matching guess trees.

12 For English, our Evalb implementation is identical to the most recent reference (EVALB20080701). For Arabic we add a constraint on
the removal of punctuation, which has a single tag (PUNC) in the ATB. Tokens tagged as PUNC are not discarded unless they consist entirely of punctuation.

5.1 Parsing Models.

The Stanford parser includes both the manually annotated grammar (Section 4) and an Arabic unknown word model with the following lexical features:

1. Presence of the determiner Al.
2. Contains digits.
3. Ends with the feminine affix p (taa marbuuTa).
4. Various verbal and adjectival suffixes.

Other notable parameters are second-order vertical Markovization and marking of unary rules. Modifying the Berkeley parser for Arabic is straightforward. After adding a ROOT node to all trees, we train a grammar using six split-and-merge cycles and no Markovization. We use the default inference parameters. Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings. However, when we pre-tag the input, as is recommended for English, we notice a 0.57% F1 improvement. We use the log-linear tagger of Toutanova et al. (2003), which gives 96.8% accuracy on the test set.

5.2 Discussion.

The Berkeley parser gives state-of-the-art performance for all metrics. Our baseline for all sentence lengths is 5.23% F1 higher than the best previous result. The difference is due to more careful pre-processing.

Figure 4: The constituent "Restoring of its constructive and effective role" parsed by the three different models (gold segmentation): (a) Reference, (b) Stanford, (c) Berkeley, (d) Bikel.

The ATB annotation distinguishes between verbal and nominal readings of maSdar
process nominals. Like verbs, maSdar takes arguments and assigns case to its objects, whereas it also demonstrates nominal characteristics by, e.g., taking determiners and heading iDafa (Fassi Fehri, 1993). In the ATB, asta'adah is tagged 48 times as a noun and 9 times as a verbal noun. Consequently, all three parsers prefer the nominal reading. Table 8b shows that verbal nouns are the hardest pre-terminal categories to identify. None of the models attach the attributive adjectives correctly.

However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009). Moreover, the Stanford parser achieves the most exact Leaf Ancestor matches and tagging accuracy that is only 0.1% below the Bikel model, which uses pre-tagged input. In Figure 4 we show an example of variation between the parsing models. We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8. The errors shown are from the Berkeley parser output, but they are representative of the other two parsing models.

6 Joint Segmentation and Parsing.

Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words. Since these are distinct syntactic units, they are typically segmented. The ATB segmentation scheme is one of many alternatives. Until now, all evaluations of Arabic parsing, including the experiments in the previous section, have assumed gold segmentation. But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline. Segmentation errors cascade into the parsing phase, placing an artificial limit on parsing performance. Lattice parsing (Chappelier et al., 1999)
is an alternative to a pipeline that prevents cascading errors by placing all segmentation options into the parse chart. Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic. We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton. To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a). Formally, for a lexicon L and segments I in L, O not in L, each word automaton accepts the language I*(O + I)I*. Aside from adding a simple rule to correct alif deletion caused by the preposition li-, no other language-specific processing is performed.

Our evaluation includes both weighted and unweighted lattices. We weight edges using a unigram language model estimated with Good-Turing smoothing. Despite their simplicity, unigram weights have been shown to be an effective feature in segmentation models (Dyer, 2009).13 The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005). MADA uses an ensemble of SVMs to first re-rank the output of a deterministic morphological analyzer. For each input token, the segmentation is then performed deterministically given the 1-best analysis.

13 Of course, this weighting makes the PCFG an improper distribution. However, in practice, unknown word models also make the distribution improper.

Table 8: Per category performance of the Berkeley parser on sentence lengths <= 70 (dev set, gold segmentation). (a) Of the high frequency phrasal categories, ADJP and SBAR are the hardest to parse. We showed in Section 2 that lexical ambiguity explains the underperformance of these categories. (b) POS tagging accuracy is lowest for maSdar verbal nouns (VBG, VN) and adjectives (e.g., JJ). Richer tag sets have been suggested for modeling morphologically complex distinctions (Diab, 2007), but we find that linguistically rich tag sets do not help parsing. (c) Coordination ambiguity is shown in dependency scores by, e.g., (S S S R) and (NP NP NP R). (NP NP PP R) and (NP NP ADJP R) are both iDafa attachment.

(a) Major phrasal categories:

  Label   # gold   F1
  ADJP    1216     59.45
  SBAR    2918     69.81
  FRAG    254      72.87
  VP      5507     78.83
  S       6579     78.91
  PP      7516     80.93
  NP      34025    84.95
  ADVP    1093     90.64
  WHNP    787      96.00

(c) Ten lowest scoring (Collins, 2003)-style dependencies occurring more than 700 times:

  Parent   Head   Modifier   Dir   # gold   F1
  NP       NP     TAG        R     946      0.54
  S        S      S          R     708      0.57
  NP       NP     ADJP       R     803      0.64
  NP       NP     NP         R     2907     0.66
  NP       NP     SBAR       R     1035     0.67
  NP       NP     PP         R     2713     0.67
  VP       TAG    PP         R     3230     0.80
  NP       NP     TAG        L     805      0.85
  VP       TAG    SBAR       R     772      0.86
  S        VP     NP         L     961      0.87

Since guess and gold trees may now have different yields, the question of evaluation is complex. Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric. But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal. Therefore, we only score guess/gold pairs with identical character yields, a condition that allows us to measure parsing, tagging, and segmentation accuracy by ignoring whitespace. Table 9 shows that MADA produces a high quality segmentation, and that the effect of cascading segmentation errors on parsing is only 1.92% F1. However, MADA is language-specific and relies on manually constructed dictionaries. Conversely, the lattice parser requires no linguistic resources and produces segmentations of comparable quality. Nonetheless, parse quality
is much lower in the joint model because a lattice is effectively a long sentence. A cell in the bottom row of the parse chart is required for each potential whitespace boundary. As we have said, parse quality decreases with sentence length. Finally, we note that simple weighting gives nearly a 2% F1 improvement, whereas Goldberg and Tsarfaty (2008) found that unweighted lattices were more effective for Hebrew.

Table 9: Dev set results for sentences of length <= 70. Coverage indicates the fraction of hypotheses in which the character yield exactly matched the reference. Each model was able to produce hypotheses for all input sentences. In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.

By establishing significantly higher parsing baselines, we have shown that Arabic parsing performance is not as poor as previously thought, but remains much lower than English. We have described grammar state splits that significantly improve parsing performance, catalogued parsing errors, and quantified the effect of segmentation errors. With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus. Our results suggest that current parsing models would benefit from better annotation consistency and enriched annotation in certain syntactic configurations.

Acknowledgments. We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work. We are also grateful to Markus Dickinson, Ali Farghaly, Nizar Habash, Seth Kulick, David McCloskey, Claude Reichard, Ryan Roth, and Reut Tsarfaty for constructive discussions. The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship. This paper is based on work supported in part by DARPA through IBM. The content does not necessarily reflect the views of the U.S.
Government, and no official endorsement should be inferred.

Named Entity Recognition: A Maximum Entropy Approach Using Global Information

This paper presents a maximum entropy-based named entity recognizer (NER). It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier. Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier. In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.

A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc.
On its own, a NER can also provide users who are looking for person or organization names with quick information. In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task. Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | s), where s is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999). We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s.
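As the testing section later describes, the per-word classifier probabilities are combined with transition probabilities that are 1 for admissible class sequences and 0 otherwise, and a dynamic programming algorithm selects the highest-probability sequence. A minimal Viterbi-style sketch of that decoding step; the class names, probabilities, and admissibility rule below are illustrative, not the paper's actual model:

```python
import math

def decode(word_probs, admissible):
    """Select the tag sequence maximizing the product of per-word
    probabilities, subject to 0/1 transition admissibility.

    word_probs: list of dicts {tag: P(tag | word, context)}, one per word.
    admissible: function (prev_tag, tag) -> bool; inadmissible
    transitions are treated as probability 0.
    """
    # scores are kept in log space to avoid underflow on long sentences
    best = {t: (math.log(p), [t]) for t, p in word_probs[0].items() if p > 0}
    for probs in word_probs[1:]:
        nxt = {}
        for t, p in probs.items():
            if p <= 0:
                continue
            cands = [(score + math.log(p), path + [t])
                     for prev, (score, path) in best.items()
                     if admissible(prev, t)]
            if cands:
                nxt[t] = max(cands)
        best = nxt
    return max(best.values())[1]

# toy admissibility rule: "person_begin" must be followed by a person class
def ok(prev, cur):
    if prev == "person_begin":
        return cur in {"person_continue", "person_end"}
    return True

probs = [{"person_begin": 0.9, "O": 0.1},
         {"location_unique": 0.5, "person_end": 0.5}]
print(decode(probs, ok))  # -> ['person_begin', 'person_end']
```

With the admissibility check disabled (always True), the product decomposes and the decoder reduces to picking each word's highest-probability class independently.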
Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999). We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush"). As such, global information from the whole context of a document is important to more accurately recognize named entities. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.

Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999). Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).

Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 x 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI. However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.

The system described in this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) * prod_j alpha_j^(f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word is "the", and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package.1

1 http://maxent.sourceforge.net

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c_1, …, c_n | s, D) = ∏_{i=1}^{n} P(c_i | s, D) · P(c_i | c_{i−1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i−2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i−1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen fewer than a small number of times during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '[Table 1: Features based on the token string]', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i−1} and w_{i+1}: Similarly, if w_{i−1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i−1} (or for w_{i+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (fewer than a small number of times), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i−1} and the next token w_{i+1} is used together with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i−1}. 
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i−1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if such a neighboring token is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, …, December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, …, Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. 
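The "frequency" used to build Corporate-Suffix-List — the number of distinct preceding tokens seen with each candidate last token — can be sketched as follows; `suffix_frequencies` and the input layout are illustrative, not the authors' code:

```python
from collections import defaultdict

def suffix_frequencies(org_names: list[list[str]]) -> dict[str, int]:
    """For each last token of an organization name, count the number of
    DISTINCT previous tokens it is seen with (the 'frequency' in the text),
    not the raw number of occurrences."""
    preceding: dict[str, set[str]] = defaultdict(set)
    for name in org_names:
        if len(name) >= 2:
            preceding[name[-1]].add(name[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# The worked example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times gives Corp. a "frequency" of 2.
orgs = [["Electric", "Corp."]] * 3 + [["Manufacturing", "Corp."]] * 5
assert suffix_frequencies(orgs)["Corp."] == 2
```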
For a token w_i that is in a consecutive sequence of initCaps tokens (w_i, …, w_{i+n}), if any of the tokens from w_i to w_{i+n+1} is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w_{i−m−1} to w_{i−1} is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check w_{i−m−1}, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system. (1)', 'CEO of McCann . . . (2)', 'The McCann family . . . (3)', '[Table 2: Sources of Dictionaries — Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names]', 'In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w_i seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence “Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.”, a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy [2].', '[Table 3: F-measure after successive addition of each global feature group — MUC6 / MUC7: Baseline 90.75% / 85.22%; +ICOC 91.50% / 86.24%; +CSPP 92.89% / 86.96%; +ACRO 93.04% / 86.99%; +SOIC 93.25% / 87.22%; +UNIQ 93.27% / 87.24%]', '[Table 5: Comparison of results for MUC6]', '[Table 4: Training Data — articles / tokens: MENERGI 318 / 160,000 (MUC6) and 200 / 180,000 (MUC7); IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000]', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder [3].', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
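The acronym-matching (ACRO) feature group described earlier can be sketched as follows. Matching an acronym against the initial letters of a consecutive initCaps sequence is an assumption made for illustration, and `acro_features` is an invented name, not the authors' code:

```python
def acro_features(tokens: list[str]) -> dict[int, str]:
    """Sketch of ACRO: collect allCaps acronyms, then mark initCaps
    sequences whose initial letters spell an acronym in the document."""
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats: dict[int, str] = {}
    for i, t in enumerate(tokens):
        if t in acronyms:
            feats[i] = "A_unique"
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            # initCaps (but not allCaps) words whose initials spell the acronym
            if all(w[:1].isupper() and not w.isupper() for w in window) and \
               "".join(w[0] for w in window) == acro:
                feats[i] = "A_begin"
                for j in range(i + 1, i + n - 1):
                    feats[j] = "A_continue"
                feats[i + n - 1] = "A_end"
    return feats

doc = "The Federal Communications Commission said FCC rules apply".split()
```

On this toy document, Federal / Communications / Commission receive A begin / A continue / A end, and FCC receives A unique, mirroring the example in the text.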
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '[2] MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '[3] Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', '[Table 6: Comparison of results for MUC7]', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive N04-1038,N04-1038,4,129,"However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(T | W), where W is the sequence of words in a sentence, and T is the sequence of named-entity tags assigned to the words in W. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(T | s, D), where T is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush”, then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) ∏_j α_j^{f_j(h,o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h,o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h,o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package [1].', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '[1] http://maxent.sourceforge.net', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c_1, …, c_n | s, D) = ∏_{i=1}^{n} P(c_i | s, D) · P(c_i | c_{i−1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i−2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i−1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen fewer than a small number of times during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '[Table 1: Features based on the token string]', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i−1} and w_{i+1}: Similarly, if w_{i−1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i−1} (or for w_{i+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (fewer than a small number of times), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i−1} and the next token w_{i+1} is used together with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i−1}. 
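The testing procedure of section 3.2 — per-token maximum entropy probabilities multiplied by 0/1 transition probabilities and maximized with dynamic programming — can be sketched as a standard Viterbi-style search. This is a minimal sketch with invented names (`decode`, `admissible`), not the authors' implementation:

```python
def decode(probs, admissible):
    """probs: list over words of {class: P(class | word, context)};
    admissible(prev, cur) -> bool plays the role of the 0/1 transition
    probability. Returns the admissible class sequence with the highest
    product of per-word probabilities."""
    # best[c] = (score of best admissible sequence ending in class c, path)
    best = {c: (p, [c]) for c, p in probs[0].items()}
    for dist in probs[1:]:
        new_best = {}
        for c, p in dist.items():
            candidates = [(s * p, path + [c])
                          for prev, (s, path) in best.items()
                          if admissible(prev, c)]
            if candidates:
                new_best[c] = max(candidates, key=lambda x: x[0])
        best = new_best
    return max(best.values(), key=lambda x: x[0])[1]

# Toy example: forbid person_begin -> location_unique, as in the text.
probs = [{"person_begin": 0.6, "not_a_name": 0.4},
         {"location_unique": 0.7, "person_end": 0.3}]
ok = lambda prev, cur: not (prev == "person_begin" and cur == "location_unique")
```

Because the inadmissible transition scores zero, the highest-probability path is forced through an admissible sequence even when a locally better class is available.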
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i−1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if such a neighboring token is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, …, December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, …, Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. 
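The dictionary lookups described above — unigram matching for most lists, unigram-plus-bigram matching for locations, and skipping tokens found in commonWords — can be sketched as follows; `dictionary_features` and the toy lists are illustrative, not the authors' code:

```python
def dictionary_features(tokens, person_first, locations, common_words):
    """Sketch of list-lookup features: only initCaps tokens not in
    commonWords are tested; locations are matched as unigrams and
    as bigrams of two consecutive tokens (e.g. 'New York')."""
    feats = {}
    for i, tok in enumerate(tokens):
        if not tok[:1].isupper() or tok in common_words:
            continue
        fs = set()
        if tok in person_first:
            fs.add("PersonFirstName")
        if tok in locations:                      # unigram location match
            fs.add("Location")
        if i + 1 < len(tokens) and f"{tok} {tokens[i+1]}" in locations:
            fs.add("Location")                    # bigram location match
        if fs:
            feats[i] = fs
    return feats

toks = "Barry flew to New York".split()
feats = dictionary_features(toks, {"Barry"}, {"New York"}, {"the", "to"})
```

Here Barry fires PersonFirstName (as in the worked example in the text) and the bigram "New York" fires Location, while lowercase tokens are never tested.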
For a token that is in a consecutive sequence of initCaps tokens, if any of the subsequent tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from the word preceding the sequence onward is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)
The McCann family . . . (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) each help to disambiguate it one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

Table 2: Sources of Dictionaries
  Description          Source
  Location Names       http://www.timeanddate.com
                       http://www.cityguide.travel-guides.com
                       http://www.worldtravelguide.net
  Corporate Names      http://www.fmlx.com
  Person First Names   http://www.census.gov/genealogy/names
  Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in the local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. for an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature I begin set to 1, Broadcasting has an additional feature I continue set to 1, and Corp. has an additional feature I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. The token needs to be in initCaps to be considered for this feature. If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where the token appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group
             MUC6     MUC7
  Baseline   90.75%   85.22%
  + ICOC     91.50%   86.24%
  + CSPP     92.89%   86.96%
  + ACRO     93.04%   86.99%
  + SOIC     93.25%   87.22%
  + UNIQ     93.27%   87.24%

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3

Table 4: Training Data
                      MUC6                    MUC7
                Articles   Tokens       Articles   Tokens
  MENERGI       318        160,000      200        180,000
  IdentiFinder  –          650,000      –          790,000
  MENE          –          –            350        321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
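The quoted error reductions follow from the Table 3 F-measures, treating 100 − F as the error rate; a quick arithmetic check (the helper below is ours):

```python
def error_reduction(baseline_f, new_f):
    """Relative reduction in error, treating (100 - F) as the error rate."""
    return (new_f - baseline_f) / (100.0 - baseline_f)

# MUC6: 90.75% -> 93.27%, MUC7: 85.22% -> 87.24%
print(round(100 * error_reduction(90.75, 93.27)))  # 27
print(round(100 * error_reduction(85.22, 87.24)))  # 14
```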
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and those of MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.

Named Entity Recognition: A Maximum Entropy Approach Using Global Information

Considerable work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc.
On its own, a NER can also provide users who are looking for person or organization names with quick information. In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task. Statistical NERs usually find the sequence of tags that maximizes the probability of the tag sequence assigned to the words of a sentence, given the sequence of words in that sentence. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999). We propose instead maximizing the probability of the tag sequence given both the sentence and the information that can be extracted from the whole document containing it.
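Finding "the sequence of tags that maximizes the probability" is done in this paper by a dynamic program over per-word class probabilities combined with 0/1 transition admissibility (see Section 3.2). A minimal Viterbi-style sketch, with class names and probabilities that are toy values of our own:

```python
import math

def viterbi_decode(word_probs, admissible):
    """word_probs: one dict per word mapping class -> P(class | word, document).
    admissible:    (prev_class, cls) -> bool, i.e. the 0/1 transition probability.
    Returns the admissible class sequence with the highest product of probabilities."""
    # best[c] = (log-prob of the best admissible sequence ending in c, that sequence)
    best = {c: (math.log(p), [c]) for c, p in word_probs[0].items()}
    for probs in word_probs[1:]:
        step = {}
        for c, p in probs.items():
            cands = [(lp + math.log(p), path + [c])
                     for prev, (lp, path) in best.items() if admissible(prev, c)]
            if cands:
                step[c] = max(cands)
        best = step
    return max(best.values())[1]

# Toy probabilities (ours, not the paper's): "person begin" followed by
# "location unique" is inadmissible, so the decoder picks the person reading.
probs = [{"person_begin": 0.9, "other": 0.1},
         {"location_unique": 0.6, "person_end": 0.4}]
ok = lambda prev, c: not (prev == "person_begin" and c == "location_unique")
print(viterbi_decode(probs, ok))  # ['person_begin', 'person_end']
```

Without the admissibility constraint, the locally most probable classes (person_begin then location_unique) would be chosen, which is exactly the kind of inadmissible sequence the transition probabilities rule out.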
Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999). We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush", then "Bush"). As such, global information from the whole context of a document is important to more accurately recognize named entities. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.

Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 participants. MENE without Proteus, however, did not do very well and only achieved an F-measure of 84.22% (Borthwick, 1999).

Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 x 20% output is used to train the reference resolution component.

We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. In Section 5, we try to compare the results of MENE, IdentiFinder, and MENERGI. However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.

The system described in this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997):

p(o | h) = (1/Z(h)) * prod_j alpha_j^f_j(h, o)

where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h, o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context:

f_j(h, o) = 1 if o = true and previous word = the; 0 otherwise.

The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package.1

1 http://maxent.sourceforge.net

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows:

P(c_1, ..., c_n | s, D) = prod_i P(c_i | s, D) * P(c_i | c_(i-1))

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token, while Borthwick uses the tokens from two before to two after it, we used only the previous token, the token itself, and the next token. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Table 1: Features based on the token string

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 x total number of possible zones) features.

Case and Zone of the Previous and Next Tokens: Similarly, if the previous (or next) token is initCaps, a corresponding feature (initCaps, zone) for that token is set to 1, etc.

Token Information: This group consists of 10 features based on the token string, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If the token is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The strings of the previous token and the next token are used together with the initCaps information of the current token. If the current token is initCaps, then a feature (initCaps, next-token string) is set to 1. If it is not initCaps, then (not-initCaps, next-token string) is set to 1. The same is done for the previous token.
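The case-and-zone groups described in this section can be sketched as a small feature extractor (a simplified illustration; the function name and the exact feature spellings are our rendering, and only a few of the groups are shown):

```python
def case_zone_features(token, zone):
    """Binary local features for one token, following the case/zone and
    token-string groups described above (simplified sketch)."""
    feats = set()
    if token[0].isupper():
        feats.add(("initCaps", zone))
    if token.isupper():
        feats.add(("allCaps", zone))        # an allCaps token is also initCaps
    if token[0].islower() and any(c.isupper() for c in token):
        feats.add(("mixedCaps", zone))
    if token[0].isupper() and token.endswith("."):
        feats.add("InitCapPeriod")          # e.g., Mr.
    return feats

print(case_zone_features("IBM", "TXT"))
print(case_zone_features("Mr.", "HL"))
```

Note that "IBM" fires both (initCaps, TXT) and (allCaps, TXT), matching the remark that an allCaps token is also initCaps; in a maximum entropy model both features can be active at once.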
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3)In sentence (1), McCann can be a person or an orga nization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with â\x80\x9cBush put a freeze on . . .', 'â\x80\x9d, because Bush is the first word, the initial caps might be due to its position (as in â\x80\x9cThey put a freeze on . . .', 'â\x80\x9d).', 'If somewhere else in the document we see â\x80\x9crestrictions put in place by President Bushâ\x80\x9d, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder â\x80\x93 650,000 â\x80\x93 790,000 MENE â\x80\x93 â\x80\x93 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7,14%.', 'ICOC and CSPP contributed the greatest im provements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W06-3114_sweta,W06-3114,3,172,"Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | s), where s = w1, ..., wn is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s.
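Spelled out side by side, the two decoding objectives contrasted above can be written as follows; the notation (s for the sentence, D for the containing document, t1, ..., tn for the tag sequence) is reconstructed, since the original symbols were lost in extraction:

```latex
% Conventional sentence-level decoding:
\hat{t}_1,\dots,\hat{t}_n \;=\; \arg\max_{t_1,\dots,t_n} P(t_1,\dots,t_n \mid s)

% Proposed document-conditioned decoding:
\hat{t}_1,\dots,\hat{t}_n \;=\; \arg\max_{t_1,\dots,t_n} P(t_1,\dots,t_n \mid s, D)
```

The only change is the extra conditioning variable D, which lets a single classifier use document-wide evidence directly instead of delegating it to a secondary error-correcting classifier.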
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) ∏j αj^fj(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function fj(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: fj(h, o) = 1 if o = true and previous word = the; 0 otherwise.', 'The parameters αj are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c1, ..., cn | s, D) = ∏i P(ci | s, D) × P(ci | ci−1), where P(ci | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w−1 and w+1: Similarly, if w−1 (or w+1) is initCaps, a corresponding feature (initCaps, zone) for the previous (or next) token is set to 1, etc. Token Information: This group consists of 10 features based on the string w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w−1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w−1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w−1) is set to 1.', 'Same for w+1.
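The Case and Zone feature group just described can be sketched as a small function. This is a minimal illustration; the function name and the (label, zone) tuple format are assumptions for the sketch, not the paper's actual feature encoding.

```python
def case_zone_features(token, zone):
    """Case-and-zone local features for one non-empty token.
    Returns a list of (case-label, zone) pairs that fire for it."""
    feats = []
    if token[0].isupper():
        feats.append(("initCaps", zone))
    if token.isupper():
        # A token in all capitals is also initCaps (both features fire).
        feats.append(("allCaps", zone))
    if token[0].islower() and any(c.isupper() for c in token):
        feats.append(("mixedCaps", zone))
    return feats
```

For example, an all-caps token such as IBM in the TXT zone fires both (initCaps, TXT) and (allCaps, TXT), while a lowercase token fires nothing from this group.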
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w−1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . .
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. .
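The "frequency" computation for candidate corporate suffixes described above (number of distinct preceding tokens, as in the Electric Corp. / Manufacturing Corp. example) can be sketched as follows. The function name and input format (a list of organization-name strings) are assumptions for illustration only.

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """'Frequency' of a candidate corporate suffix = the number of
    DISTINCT tokens seen immediately before it when it is the last
    token of an organization name (not the raw occurrence count)."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            # Record the distinct token preceding the final token.
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}
```

With Electric Corp. seen 3 times and Manufacturing Corp. seen 5 times, the frequency of Corp. is 2, matching the worked example in the text; the most frequent such suffixes would then form Corporate-Suffix-List.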
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens from w up to the token just after the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the word just before the sequence up to w−1 is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2 (Sources of Dictionaries). Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net. Corporate Names: http://www.fmlx.com. Person First Names: http://www.census.gov/genealogy/names. Person Last Names.', 'The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3 (F-measure after successive addition of each global feature group; MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4 (Training Data; No. of Articles / No. of Tokens for MUC6 and MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's.
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive D10-1044_swastika,D10-1044,3,3,"They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.","We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
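The formulation above, choosing the tag sequence that maximizes p(t1, ..., tn | s, D), rests on the exponential form of the maximum entropy model described later (Section 3.1): p(o | h) = exp(sum_j w_j * f_j(h, o)) / Z(h), with binary feature functions. A minimal illustrative sketch, not the paper's actual Java opennlp implementation; the feature, weight, and history dictionary here are invented for illustration:

```python
import math

def maxent_prob(history, outcomes, features, weights):
    # p(o | h) = exp(sum_j w_j * f_j(h, o)) / Z(h), with binary feature functions
    scores = {
        o: math.exp(sum(w for f, w in zip(features, weights) if f(history, o)))
        for o in outcomes
    }
    z = sum(scores.values())  # normalization function Z(h)
    return {o: s / z for o, s in scores.items()}

# Invented example: one binary feature firing when an initCaps token is tagged person.
features = [lambda h, o: o == "person" and h["initCaps"]]
p = maxent_prob({"initCaps": True}, ["person", "not-a-name"], features, [1.5])
```

With a positive weight on the single feature, the "person" outcome receives the larger share of the probability mass, and the distribution still sums to one.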
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
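The local feature groups just listed are all binary indicators computed from the token string, its zone, and its sentence position. A toy sketch of how a few of them (Case and Zone, InitCapPeriod, firstword) could be computed; the function name, dictionary representation, and default zone are illustrative assumptions, not the authors' code:

```python
def local_features(token, is_first_word, zone="TXT"):
    # Binary features over the token string and its zone (illustrative subset).
    feats = {}
    init_caps = token[:1].isupper()
    if init_caps:
        feats[f"initCaps-{zone}"] = 1      # Case and Zone feature group
    if token.isalpha() and token.isupper():
        feats[f"allCaps-{zone}"] = 1       # an allCaps token is also initCaps
    if init_caps and token.endswith("."):
        feats["InitCapPeriod"] = 1         # token information, e.g. "Mr."
    if is_first_word:
        feats["firstword"] = 1             # First Word feature group
    return feats
```

For example, the token "Mr." in mid-sentence fires initCaps-TXT and InitCapPeriod but not firstword, mirroring the description above.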
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
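The "frequency" used to compile cslist counts distinct preceding tokens rather than raw occurrences. A minimal sketch of that computation; the helper name and input format are hypothetical, but it reproduces the worked example above (Electric Corp. seen 3 times and Manufacturing Corp. seen 5 times give Corp. a frequency of 2):

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    # "Frequency" of a candidate last token = number of DISTINCT preceding
    # tokens it appears with, not the raw number of occurrences.
    preceding = defaultdict(set)
    for name in org_names:
        toks = name.split()
        if len(toks) >= 2:
            preceding[toks[-1]].add(toks[-2])
    return {suffix: len(prev) for suffix, prev in preceding.items()}

orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
freqs = suffix_frequencies(orgs)  # freqs["Corp."] == 2
```

Using a set of preceding tokens rather than a counter is what makes eight occurrences of Corp. collapse to a frequency of two.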
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; +ICOC 91.50% / 86.24%; +CSPP 92.89% / 86.96%; +ACRO 93.04% / 86.99%; +SOIC 93.25% / 87.22%; +UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (articles / tokens): MENERGI MUC6 318 / 160,000, MUC7 200 / 180,000; IdentiFinder MUC6 – / 650,000, MUC7 – / 790,000; MENE MUC6 – / –, MUC7 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder (see footnote 3).', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
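The acronym (ACRO) feature group described in the feature section above matches sequences of initial-capitalized words against all-caps acronyms found elsewhere in the document, assigning A begin / A continue / A end to the expansion and A unique to the acronym itself. A hypothetical sketch of that matching, not the authors' implementation:

```python
def acronym_labels(acronym, candidate_tokens):
    # If the initials of a sequence of initCaps tokens spell the acronym,
    # label them A_begin / A_continue / A_end; the acronym gets A_unique.
    initials = "".join(t[0] for t in candidate_tokens if t[:1].isupper())
    if initials.upper() != acronym.upper():
        return {}
    labels = {acronym: "A_unique"}
    for i, tok in enumerate(candidate_tokens):
        if i == 0:
            labels[tok] = "A_begin"
        elif i == len(candidate_tokens) - 1:
            labels[tok] = "A_end"
        else:
            labels[tok] = "A_continue"
    return labels
```

With FCC and Federal Communications Commission in the same document, this reproduces the labeling in the example: Federal gets A begin, Communications A continue, Commission A end, and FCC A unique.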
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu.)', '(Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive W06-3114_swastika,W06-3114,4,173,The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.,"Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
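The document-level information D in the proposed formulation can be made concrete with the ICOC (InitCaps of Other Occurrences) feature described in the feature section: for a word whose capitalization may be positional, consult the case of its first occurrence in an unambiguous, non-sentence-initial position elsewhere in the document. A hypothetical sketch with an invented document representation:

```python
def icoc_feature(doc, index):
    # doc: list of (token, is_first_word_of_sentence) pairs for one document.
    # Report the case of the FIRST other occurrence of the word found in an
    # unambiguous (non sentence-initial) position.
    word = doc[index][0].lower()
    for j, (tok, first) in enumerate(doc):
        if j != index and not first and tok.lower() == word:
            return "ICOC-initCaps" if tok[:1].isupper() else "ICOC-notInitCaps"
    return None  # no unambiguous occurrence elsewhere in the document

doc = [("Bush", True), ("put", False), ("a", False), ("freeze", False),
       ("on", False), ("President", False), ("Bush", False)]
```

For the sentence-initial "Bush" at position 0, the later mid-sentence "Bush" yields ICOC-initCaps, which is the evidence the Bush example above appeals to.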
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, . . ., c_n | s, D) = product_i [ P(c_i | s, D) x P(c_i | c_i-1) ], where P(c_i | s, D) is determined by the maximum entropy classifier and P(c_i | c_i-1) is the 0/1 transition probability above.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1.', 'Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) of w-1 (or of w+1) is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string of w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w-1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w-1) is set to 1.', 'Same for w+1. 
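The case-and-zone indicators described above can be sketched as follows. This is an illustrative re-implementation, not the paper's code; note one deliberate simplification, flagged in the comments: we assign a single case category per token, whereas in the paper an allCaps token also counts as initCaps.

```python
# Illustrative sketch of the Case and Zone feature group described above.
# Zone names follow the MUC6 example (TXT, HL, DATELINE, DD). Assigning a
# single case category per token is a simplification of the paper's scheme,
# in which an allCaps token is also initCaps.

ZONES = ["TXT", "HL", "DATELINE", "DD"]
CASES = ["initCaps", "allCaps", "mixedCaps"]

def case_category(token):
    if token.isupper():
        return "allCaps"       # e.g. "IBM"
    if token[:1].isupper():
        return "initCaps"      # e.g. "Bush"
    if token != token.lower():
        return "mixedCaps"     # starts lower case but has capitals, e.g. "iPod"
    return None                # all lower case: no case feature fires

def case_zone_features(token, zone):
    # One binary indicator per (case, zone) pair: 3 x number-of-zones features,
    # of which at most one is set to 1 for a given token.
    cat = case_category(token)
    return {(c, z): int(c == cat and z == zone) for c in CASES for z in ZONES}
```

A token such as "Bush" appearing in the HL zone thus turns on exactly the (initCaps, HL) indicator and leaves the other eleven at 0.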
In the case where the next token w+1 is a hyphen, then the token w+2 after the hyphen is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the corresponding feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
. . ., Sunday, then the feature DayOfTheWeek is set to 1.', 'If w is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
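The "frequency" used to compile the corporate-suffix list can be sketched as follows; this is an illustrative re-implementation of the counting rule just described, with invented function and variable names, not the paper's code.

```python
from collections import defaultdict

# Sketch of the "frequency" computation described above: a candidate corporate
# suffix is scored by the number of DISTINCT previous tokens it occurs with
# as the last word of an organization name, not by its raw count.

def suffix_frequencies(org_names):
    """org_names: tokenized organization names seen in training data."""
    preceding = defaultdict(set)
    for tokens in org_names:
        if len(tokens) >= 2:
            # record the token immediately before the final token
            preceding[tokens[-1]].add(tokens[-2])
    return {last: len(prevs) for last, prevs in preceding.items()}
```

With the example above (Electric Corp. seen three times, Manufacturing Corp. five times, and no other preceding tokens), Corp. occurs eight times but its frequency is 2, since only two distinct preceding tokens are observed.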
For a token w that is in a consecutive sequence of initCaps tokens, if the token following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If the token preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document.', 'w needs to be in initCaps to be considered for this feature.', 'If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group. Baseline: 90.75% (MUC6), 85.22% (MUC7); + ICOC: 91.50%, 86.24%; + CSPP: 92.89%, 86.96%; + ACRO: 93.04%, 86.99%; + SOIC: 93.25%, 87.22%; + UNIQ: 93.27%, 87.24%.', 'Table 4: Training Data. MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 tokens (MUC6) and 790,000 tokens (MUC7); MENE: 350 articles / 321,000 tokens (MUC7).', '(Table 5: Comparison of results for MUC6.)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
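The error reductions quoted above follow directly from the F-measures in Table 3. As a small check, relative error reduction is (F_new - F_baseline) / (100 - F_baseline); the function name below is ours, not the paper's.

```python
# Relative error reduction implied by two F-measures (in percent).
# For MUC6: 90.75 -> 93.27 removes 2.52 of the remaining 9.25 points of error.

def error_reduction_pct(baseline_f, final_f):
    return 100.0 * (final_f - baseline_f) / (100.0 - baseline_f)
```

Plugging in the Table 3 numbers gives roughly 27% for MUC6 (90.75 to 93.27) and roughly 14% for MUC7 (85.22 to 87.24), matching the figures reported in the text.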
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu.', 'Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', '(Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive D10-1044_swastika,D10-1044,1,1,"Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.",This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) exp(sum_j lambda_j f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word in h is the, and f_j(h, o) = 0 otherwise.', 'The parameters lambda_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, . . ., c_n | s, D) = product_i [ P(c_i | s, D) x P(c_i | c_i-1) ], where P(c_i | s, D) is determined by the maximum entropy classifier and P(c_i | c_i-1) is the 0/1 transition probability above.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1.', 'Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) of w-1 (or of w+1) is set to 1, etc.', 'Token Information: This group consists of 10 features based on the string of w, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.', 'First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used with the initCaps information of w. If w has initCaps, then a feature (initCaps, w-1) is set to 1.', 'If w is not initCaps, then (not-initCaps, w-1) is set to 1.', 'Same for w+1. 
In the case where the next token w+1 is a hyphen, then the token w+2 after the hyphen is also used as a feature: (initCaps, w+2) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w+1 is found in the list of person first names, the corresponding feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
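The “frequency” heuristic above (number of distinct preceding tokens, not raw counts) can be sketched as follows. The helper name and the threshold parameter are assumptions for illustration; the input format (organization names as token lists) is also assumed.

```python
from collections import defaultdict

# Sketch of building Corporate-Suffix-List: the "frequency" of a candidate
# suffix is the number of DISTINCT tokens seen immediately before it as the
# last word of an organization name, so "Electric Corp." counted 3 times and
# "Manufacturing Corp." 5 times still give Corp. a frequency of only 2.

def corporate_suffixes(org_names, min_distinct=2):
    """org_names: organization names as token lists; returns frequent suffixes."""
    preceding = defaultdict(set)
    for name in org_names:
        if len(name) >= 2:
            preceding[name[-1]].add(name[-2])   # last token <- distinct previous token
    return {suf for suf, prev in preceding.items() if len(prev) >= min_distinct}
```

A Person-Prefix-List could be built the same way over the tokens preceding person names.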
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries – Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names. The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
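The ACRO matching just described (an all-caps acronym such as FCC matched against a sequence of initial-capitalized words spelling it out) can be sketched as follows; the function name and label strings such as A_begin are illustrative renderings of the feature names in the text.

```python
# Sketch of the ACRO feature group: label each word of a candidate expansion
# of an acronym with A_begin / A_continue / A_end, or return None if the
# initials do not spell out the acronym.  (The acronym token itself would
# additionally get the A_unique feature.)

def acro_labels(acronym, phrase):
    """Return A_begin/A_continue/A_end labels for a matching expansion, else None."""
    if len(phrase) < 2 or len(phrase) != len(acronym):
        return None
    if any(w[0] != a or not w[0].isupper() for w, a in zip(phrase, acronym)):
        return None
    return ["A_begin"] + ["A_continue"] * (len(phrase) - 2) + ["A_end"]
```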
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token needs to be in initCaps to be considered for this feature.', 'If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7) – Baseline: 90.75% / 85.22%; + ICOC: 91.50% / 86.24%; + CSPP: 92.89% / 86.96%; + ACRO: 93.04% / 86.99%; + SOIC: 93.25% / 87.22%; + UNIQ: 93.27% / 87.24%. Table 4: Training Data (No. of Articles / No. of Tokens, MUC6 then MUC7) – MENERGI: 318 / 160,000 and 200 / 180,000; IdentiFinder: – / 650,000 and – / 790,000; MENE: – / – and 350 / 321,000. Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive C02-1025,C02-1025,5,14,Their results show that their high performance NER use less training data than other systems.,"These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, . . . , tn | s), where s is the sequence of words in a sentence, and t1, . . . , tn is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t1, . . . , tn | s, D), where t1, . . . , tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F-measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏j αj^fj(h, o), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function fj(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: fj(h, o) = 1 if o = true and the previous word = the, and 0 otherwise. The parameters αj are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, . . .', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c1, . . . , cn assigned to the words in a sentence s in a document D is defined as 
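The exponential form above, p(o | h) = (1/Z(h)) ∏j αj^fj(h, o) with binary features, can be evaluated as in the following toy sketch. The features and weights here are made-up illustrations; in the described system the weights would come from GIS training.

```python
# Toy evaluation of the maximum entropy exponential form with binary feature
# functions f_j(h, o) and positive weights alpha_j.  Z(h) normalizes over all
# possible outcomes for the given history.

def maxent_prob(history, outcome, outcomes, features, alphas):
    """p(o|h) = (1/Z(h)) * prod_j alpha_j ** f_j(h, o), for binary f_j."""
    def score(o):
        s = 1.0
        for f, a in zip(features, alphas):
            if f(history, o):          # binary feature fires: multiply weight in
                s *= a
        return s
    z = sum(score(o) for o in outcomes)   # normalization Z(h)
    return score(outcome) / z
```

With a single feature of weight 3.0 that fires only for one outcome, that outcome gets probability 3/(3+1) = 0.75 against one alternative.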
follows: P(c1, . . . , cn | s, D) = ∏i P(ci | s, D) × P(ci | ci−1), where P(ci | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token, while Borthwick uses the tokens from two before to two after it, we used only the token itself and its immediate previous and next tokens. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of the Previous and Next Tokens: Similarly, if the previous token (or the next token) is initCaps, a feature (initCaps, zone) for that token is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If a token string is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, previous-token string) is set to 1.', 'If it is not initCaps, then (not-initCaps, previous-token string) is set to 1.', 'The same is done for the next token. 
In the case where the next token is a hyphen, the token after the hyphen is also used as a feature: (initCaps, that token) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the previous and next tokens are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the next token is found in the list of person first names, the feature PersonFirstName for it is set to 1.', 'Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If the token is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries – Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names. The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
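The SOIC computation just described (for a sequence of initial-capitalized words, find its longest substring that also occurs elsewhere in the document as an initCaps sequence) can be sketched as below. As a simplification, each other occurrence is treated as a whole sequence; the function name is an assumption.

```python
# Sketch of the SOIC feature group: for "Even News Broadcasting Corp.", if
# "News Broadcasting Corp." occurs elsewhere in the document, that longest
# shared substring is what receives the I_begin / I_continue / I_end features.

def longest_shared_substring(seq, other_sequences):
    """Longest contiguous sub-list of `seq` appearing among `other_sequences`."""
    occurs = {tuple(s) for s in other_sequences}
    for n in range(len(seq), 0, -1):              # try longest spans first
        for i in range(len(seq) - n + 1):
            if tuple(seq[i:i + n]) in occurs:
                return seq[i:i + n]
    return []
```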
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive C00-2123,C00-2123,7,193,The approach assumes that the word reordering is restricted to a few positions in the source sentence.,"By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3)In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive C00-2123,C00-2123,3,165,A beam search concept is applied as in speech recognition.,"We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder – 650,000 – 790,000 MENE – – 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens. Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive D10-1044_swastika,D10-1044,6,146,The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.,"These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
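As an illustration, the Case-and-Zone and Token-Information feature groups above can be sketched roughly as follows. This is a minimal sketch, not the authors' implementation: the function names are my own, and only a few of the 10 token-string tests are shown, since Table 1 is not reproduced in this excerpt.

```python
import re

def case_zone_features(token, zone):
    """Case-and-Zone group: case patterns paired with the token's
    document zone (e.g. TXT, HL). An allCaps token also fires initCaps,
    as described in the text."""
    feats = {}
    if token[:1].isupper():                       # initCaps
        feats[("initCaps", zone)] = 1
    if token.isupper():                           # allCaps
        feats[("allCaps", zone)] = 1
    if token[:1].islower() and any(c.isupper() for c in token):
        feats[("mixedCaps", zone)] = 1            # e.g. "iPod"
    return feats

def token_info_features(token):
    """A few token-string tests in the spirit of Table 1 (the full
    set of 10 features is not listed in this excerpt)."""
    feats = {}
    if token[:1].isupper() and token.endswith("."):
        feats["InitCapPeriod"] = 1                # e.g. "Mr."
    if re.fullmatch(r"[A-Z]+", token):
        feats["AllCaps"] = 1
    if re.fullmatch(r"\d+", token):
        feats["AllDigits"] = 1
    return feats
```

For example, the token "Mr." in the TXT zone would fire (initCaps, TXT) in the first group and InitCapPeriod in the second; since the groups are independent binary features, both simply co-occur in the maximum entropy model.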
In the case where the next token w_{i+1} is a hyphen, then the token w_{i+2} after the hyphen is also used as a feature: (initCaps, w_{i+2}) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps that are not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w_{i+1} is found in the list of person first names, the feature PersonFirstName for the next token is set to 1.

Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w_i is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1. If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list cslist of tokens that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
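The "frequency" computation just described counts distinct preceding tokens rather than raw occurrences. It can be sketched as follows (a minimal sketch; the helper name and the list-of-name-strings input format are my own assumptions):

```python
from collections import defaultdict

def suffix_frequency(org_names):
    """For each token seen as the last word of an organization name,
    count the number of DISTINCT preceding tokens (the paper's notion
    of "frequency"), not the raw occurrence count."""
    prev_tokens = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            prev_tokens[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in prev_tokens.items()}

# The example from the text: "Electric Corp." seen 3 times and
# "Manufacturing Corp." seen 5 times gives Corp. a frequency of 2,
# because Corp. has only two distinct preceding tokens.
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequency(orgs))   # {'Corp.': 2}
```

Counting distinct contexts rather than raw counts keeps a suffix from looking "frequent" merely because one organization name is mentioned many times.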
For a token w_i that is in a consecutive sequence of initCaps tokens (w_{i-m}, ..., w_{i+n}), if any of the tokens from w_{i+1} to w_{i+n+1} is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from w_{i-m-1} to w_{i-1} is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check w_{i-m-1}, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

    McCann initiated a new global system. (1)
    CEO of McCann ... (2)
    The McCann family ... (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

Table 2: Sources of dictionaries

    Description          Source
    Location Names       http://www.timeanddate.com
                         http://www.cityguide.travel-guides.com
                         http://www.worldtravelguide.net
    Corporate Names      http://www.fmlx.com
    Person First Names   http://www.census.gov/genealogy/names
    Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on ...", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on ..."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in the local features, for a token w_i seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature I begin set to 1, Broadcasting has an additional feature I continue set to 1, and Corp. has an additional feature I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document. w_i needs to be in initCaps to be considered for this feature. If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group

    System     MUC6     MUC7
    Baseline   90.75%   85.22%
    + ICOC     91.50%   86.24%
    + CSPP     92.89%   86.96%
    + ACRO     93.04%   86.99%
    + SOIC     93.25%   87.22%
    + UNIQ     93.27%   87.24%

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.

Table 4: Training data

    System        MUC6 Articles   MUC6 Tokens   MUC7 Articles   MUC7 Tokens
    MENERGI       318             160,000       200             180,000
    IdentiFinder  –               650,000       –               790,000
    MENE          –               –             350             321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
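The acronym-matching (ACRO) feature group described in Section 4.2 can be sketched as follows. This is a minimal sketch under my own assumptions (the function name, the tokenized-document input, and the simple first-letter matching rule are not from the paper, which does not spell out its matching procedure):

```python
def acro_features(tokens):
    """ACRO sketch: collect allCaps tokens as acronyms, then mark
    initCaps sequences whose initial letters spell an acronym found
    anywhere in the document (A_begin / A_continue / A_end), and mark
    the acronym itself (A_unique)."""
    feats = [dict() for _ in tokens]
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    for i, t in enumerate(tokens):
        if t in acronyms:
            feats[i]["A_unique"] = 1
    # Look for initCaps (but not allCaps) sequences matching each acronym.
    for acro in acronyms:
        n = len(acro)
        for start in range(len(tokens) - n + 1):
            window = tokens[start:start + n]
            if all(w[:1].isupper() and not w.isupper() for w in window) \
               and "".join(w[0] for w in window) == acro:
                feats[start]["A_begin"] = 1
                for j in range(start + 1, start + n - 1):
                    feats[j]["A_continue"] = 1
                feats[start + n - 1]["A_end"] = 1
    return feats
```

On the FCC example from the text, "FCC" receives A_unique while "Federal", "Communications", and "Commission" receive A_begin, A_continue, and A_end respectively.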
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results together with the size of training data used (Table 5 and Table 6). Besides the size of the training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they had added list membership features, which helped marginally in certain domains. (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens.) Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to information from the same document only.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency, or a complex formulation of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
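As a closing illustration, the admissible-sequence decoding of Section 3.2 (per-token classifier probabilities multiplied by 0/1 transition probabilities, maximized by dynamic programming) can be sketched as follows. This is a minimal sketch under my own assumptions about the data layout; it is not the authors' implementation.

```python
def decode(class_probs, admissible):
    """Viterbi-style selection of the class sequence maximizing
    prod_i P(c_i | s, D) * P(c_i | c_{i-1}), where the transition
    probability P(c_i | c_{i-1}) is 1 for admissible class pairs
    and 0 otherwise (inadmissible sequences are thus eliminated).
    class_probs: one dict {class: P(class | s, D)} per token.
    admissible:  set of allowed (prev_class, class) pairs."""
    # best[c] = (probability, path) of the best sequence ending in c.
    best = {c: (p, [c]) for c, p in class_probs[0].items()}
    for probs in class_probs[1:]:
        nxt = {}
        for c, p in probs.items():
            cands = [(bp * p, path + [c])
                     for prev, (bp, path) in best.items()
                     if (prev, c) in admissible]
            if cands:
                nxt[c] = max(cands, key=lambda x: x[0])
        best = nxt
    return max(best.values(), key=lambda x: x[0])[1]

# A two-token example: "person begin" may only be followed by
# "person end", so the inadmissible mixed sequences are pruned.
probs = [{"person_begin": 0.6, "location_unique": 0.4},
         {"person_end": 0.5, "location_unique": 0.5}]
adm = {("person_begin", "person_end"),
       ("location_unique", "location_unique")}
print(decode(probs, adm))   # ['person_begin', 'person_end']
```

The 0/1 transitions simply zero out any path through an inadmissible pair, so the dynamic program returns the highest-probability admissible sequence.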
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of initCaps tokens, if the word following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', 'Table 2 (Sources of Dictionaries): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', '(2) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token needs to be in initCaps to be considered for this feature.', 'If it is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3 (F-measure after successive addition of each global feature group; MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4 (Training Data; No. of Articles / No. of Tokens, for MUC6 and MUC7 respectively): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive J96-3004,J96-3004,4,69,"they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
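As an aside for readers of the document text above: two of the global feature groups it describes, ICOC (InitCaps of Other Occurrences) and ACRO (acronym matching), can be illustrated with a small sketch. The code below is our own hypothetical illustration, not the authors' implementation; the function names and the document representation (a list of tokenized sentences) are assumptions.

```python
def icoc_feature(token, sentences):
    """ICOC sketch: find the first occurrence of `token` in an unambiguous
    position (i.e., not the first word of a sentence) and report whether it
    is capitalized there."""
    for sent in sentences:
        for w in sent[1:]:  # skip sentence-initial words
            if w.lower() == token.lower():
                return "ICOC-initCaps" if w[0].isupper() else "ICOC-notInitCaps"
    return None  # no unambiguous occurrence found

def acro_features(sentences):
    """ACRO sketch: collect all-caps tokens as candidate acronyms, then mark
    capitalized word sequences whose initials spell one of the acronyms with
    A_begin / A_continue / A_end features."""
    acronyms = {w for sent in sentences for w in sent
                if w.isupper() and len(w) > 1}
    feats = {}
    for sent in sentences:
        for start in range(len(sent)):
            for acro in acronyms:
                n = len(acro)
                span = sent[start:start + n]
                if (len(span) == n
                        and all(w[0].isupper() for w in span)
                        and "".join(w[0] for w in span) == acro):
                    feats[(tuple(span), acro)] = (
                        ["A_begin"] + ["A_continue"] * (n - 2) + ["A_end"])
    return feats

# Tiny demo document, mirroring the paper's FCC example.
doc = [["FCC", "issued", "a", "ruling", "."],
       ["The", "Federal", "Communications", "Commission", "met", "."]]
acro = acro_features(doc)
case = icoc_feature("federal", doc)
```

In a full system these boolean outcomes would be fed to the maximum entropy classifier as binary features, alongside the local feature groups; this sketch only shows how the document-level evidence is gathered.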
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of initCaps tokens, if the word following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', 'Table 2 (Sources of Dictionaries): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', '(2) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3 (F-measure after successive addition of each global feature group; MUC6 / MUC7): Baseline 90.75% 85.22%; + ICOC 91.50% 86.24%; + CSPP 92.89% 86.96%; + ACRO 93.04% 86.99%; + SOIC 93.25% 87.22%; + UNIQ 93.27% 87.24%. Table 4 (Training Data; Systems: No. of Articles / No. of Tokens for MUC6, then MUC7): MENERGI 318 160,000 200 180,000; IdentiFinder – 650,000 – 790,000; MENE – – 350 321,000. Table 5: Comparison of results for MUC6. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu; footnote 3: training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive D10-1044_swastika,D10-1044,5,145,Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named-entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens from to is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 Table 3 (F-measure after successive addition of each global feature group; MUC6 / MUC7): Baseline 90.75% 85.22%; + ICOC 91.50% 86.24%; + CSPP 92.89% 86.96%; + ACRO 93.04% 86.99%; + SOIC 93.25% 87.22%; + UNIQ 93.27% 87.24%. Table 4 (Training Data; Systems: No. of Articles / No. of Tokens for MUC6, then MUC7): MENERGI 318 160,000 200 180,000; IdentiFinder – 650,000 – 790,000; MENE – – 350 321,000. Table 5: Comparison of results for MUC6. For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu; footnote 3: training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.) Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive D10-1083,D10-1083,8,238,"The resulting model is compact, efficiently learnable and linguistically expressive.","Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package.1', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, both MENE and IdentiFinder used more training data than we did. (Footnote 1: http://maxent.sourceforge.net) 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: the product, over all the words in the sentence, of the class probability determined by the maximum entropy classifier and the transition probability between consecutive classes.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token, while Borthwick uses a window from two tokens before to two tokens after it, we used only the previous token, the token itself, and the next token. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of the previous and next tokens: Similarly, if the previous (or next) token is initCaps, a feature (initCaps, zone) for that token is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature, firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If the token is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, neighboring token string) is set to 1.', 'If it is not initCaps, then (not-initCaps, neighboring token string) is set to 1.', 'The same applies for the next token. 
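The case-and-zone and token-string feature groups above can be sketched as follows. This is a minimal illustration; the function name and feature-string spellings are ours, not the paper's exact identifiers, and only a handful of the feature groups are shown.

```python
def case_zone_features(token, zone, is_first_word):
    """Sketch of a few local feature groups (case/zone, InitCapPeriod, firstword).
    Feature names are illustrative; zone is one of e.g. TXT, HL, DATELINE, DD."""
    feats = {f"zone-{zone}": 1}                      # exactly one zone feature fires
    if token[:1].isupper():
        feats[f"initCaps-{zone}"] = 1                # starts with a capital letter
    if token.isupper():
        feats[f"allCaps-{zone}"] = 1                 # allCaps tokens are also initCaps
    if token[:1].islower() and any(c.isupper() for c in token):
        feats[f"mixedCaps-{zone}"] = 1               # e.g. "eBay"
    if token[:1].isupper() and token.endswith("."):
        feats["InitCapPeriod"] = 1                   # e.g. "Mr."
    if is_first_word:
        feats["firstword"] = 1
    return feats
```

For instance, the token "Mr." in the TXT zone would fire zone-TXT, initCaps-TXT, and InitCapPeriod, consistent with the description above.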
In the case where the next token is a hyphen, the token after the hyphen is also used as a feature: (initCaps, that token string) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the previous and next tokens are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if the previous token is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If it is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
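The "frequency" computation for candidate corporate suffixes (number of distinct tokens seen immediately before the candidate) can be sketched as below. The helper name is ours; the input is assumed to be the organization names observed in training, one string per occurrence.

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """'Frequency' of a candidate corporate suffix = number of DISTINCT tokens
    seen immediately before it as the last token of an organization name."""
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}

# The paper's example: Electric Corp. seen 3 times, Manufacturing Corp. 5 times,
# and Corp. seen with no other preceding tokens
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequencies(names))  # {'Corp.': 2}
```

Counting distinct preceding tokens rather than raw occurrences keeps a suffix that appears in many different organization names ranked above one that merely repeats in a single frequent name.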
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in or immediately after the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) (Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.) The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder â\x80\x93 650,000 â\x80\x93 790,000 MENE â\x80\x93 â\x80\x93 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7,14%.', 'ICOC and CSPP contributed the greatest im provements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
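The acronym-matching (ACRO) feature group described in the global features section, which aligns all-caps acronyms like FCC with initCaps sequences like Federal Communications Commission, can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (first-letter matching only), not the paper's implementation.

```python
def acro_features(tokens, acronyms):
    """Sketch of ACRO: for each all-caps acronym found in the document, tag any
    sequence of initCaps (but not all-caps) words whose initials spell it."""
    feats = [set() for _ in tokens]
    for acro in acronyms:
        n = len(acro)
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            # initCaps words only; all-caps words (the acronym itself) are excluded
            if all(w[:1].isupper() and not w.isupper() for w in window) and \
               "".join(w[0] for w in window) == acro:
                feats[i].add("A_begin")
                for j in range(i + 1, i + n - 1):
                    feats[j].add("A_continue")
                feats[i + n - 1].add("A_end")
    for i, w in enumerate(tokens):
        if w in acronyms:
            feats[i].add("A_unique")  # the acronym token itself
    return feats
```

On the paper's example, Federal/Communications/Commission receive A_begin/A_continue/A_end and FCC receives A_unique.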
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive P08-1043_swastika,P08-1043,4,189,"They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.","Local features are features that are based on neighboring tokens, as well as the token itself.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) exp(Σ_j λ_j f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word = the, and 0 otherwise.', 'The parameters λ_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability P(c_i | c_{i-1}) between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, ..., c_n | s, D) = Π_{i=1..n} [ p(c_i | s, D) × P(c_i | c_{i-1}) ], where p(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen fewer than a small number of times during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token w_i starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i+1} and w_{i-1}: Similarly, if w_{i+1} (or w_{i-1}) is initCaps, a corresponding (initCaps, zone) feature for w_{i+1} (or w_{i-1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (fewer than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
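The case-and-zone binary features just described could be computed roughly as follows. This is a minimal sketch: the feature-name format (e.g., "initCaps-TXT") is our own, while the case categories and MUC6 zone set come from the text.

```python
def case_zone_features(token, zone):
    """Binary (case, zone) features for one token, in the style described
    above. zone is one of the MUC6 zones: TXT, HL, DATELINE, DD."""
    feats = {f"zone-{zone}": 1}
    if token[:1].isupper():
        feats[f"initCaps-{zone}"] = 1      # starts with a capital letter
    if token.isupper():
        feats[f"allCaps-{zone}"] = 1       # all capital letters
    elif token[:1].islower() and any(c.isupper() for c in token):
        feats[f"mixedCaps-{zone}"] = 1     # e.g., eBay-style casing
    return feats

print(case_zone_features("IBM", "TXT"))
print(case_zone_features("Bush", "HL"))
```

Note that an allCaps token also fires the initCaps feature, matching the remark above that a token that is allCaps will also be initCaps.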
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i+1} and w_{i-1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, the corresponding PersonFirstName feature is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
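The "frequency" computation for cslist described above (counting the number of distinct preceding tokens for each candidate suffix) can be sketched as follows. This is our own illustration, assuming organization names are available as whitespace-tokenized strings; the top-k cutoff is a made-up stand-in for "most frequently occurring".

```python
from collections import defaultdict

def corporate_suffix_list(org_names, top_k=15):
    """For each token appearing as the last token of an organization name,
    count the number of DISTINCT preceding tokens it was seen with (the
    notion of "frequency" described above), and keep the top_k suffixes."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    freq = {suffix: len(prevs) for suffix, prevs in preceding.items()}
    return sorted(freq, key=freq.get, reverse=True)[:top_k]

# Repeated mentions do not inflate the count; only distinct preceding
# tokens do, as in the Electric Corp. / Manufacturing Corp. example.
orgs = ["Electric Corp.", "Electric Corp.", "Electric Corp.",
        "Manufacturing Corp.", "Manufacturing Corp.",
        "News Broadcasting Corp.", "Acme Inc.", "Widget Inc."]
print(corporate_suffix_list(orgs))
```

Here Corp. has frequency 3 (Electric, Manufacturing, Broadcasting) and Inc. has frequency 2, so Corp. ranks first.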
For a token w_i that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data. MENERGI: 318 articles, 160,000 tokens (MUC6); 200 articles, 180,000 tokens (MUC7). IdentiFinder: 650,000 tokens (MUC6); 790,000 tokens (MUC7). MENE: 350 articles, 321,000 tokens (MUC7).', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.']",abstractive
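The acronym-matching (ACRO) feature group described above can be sketched roughly as follows. The tagging scheme names (A begin/continue/end/unique, written here with underscores) come from the text; the matching heuristic (first letters of an initCaps sequence spelling an all-caps token) is our assumption about how the matching might work.

```python
def acro_features(tokens):
    """Assign A_begin/A_continue/A_end to initCaps sequences whose first
    letters spell an all-caps acronym found in the same document, and
    A_unique to the acronym token itself (a rough sketch of ACRO)."""
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats = {}
    n = len(tokens)
    for i in range(n):
        for j in range(i + 2, n + 1):  # candidate sequences of length >= 2
            seq = tokens[i:j]
            # the sequence must be initCaps words (not themselves all-caps)
            if not all(t[:1].isupper() and not t.isupper() for t in seq):
                continue
            acro = "".join(t[0] for t in seq)
            if acro in acronyms:
                feats[seq[0]] = "A_begin"
                for middle in seq[1:-1]:
                    feats[middle] = "A_continue"
                feats[seq[-1]] = "A_end"
                feats[acro] = "A_unique"
    return feats

doc = "FCC says Federal Communications Commission will rule".split()
print(acro_features(doc))
```

On this toy document, Federal/Communications/Commission get A_begin/A_continue/A_end and FCC gets A_unique, mirroring the FCC example in the text.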
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995).', 'Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t_1, ..., t_n | s), where s is the sequence of words in a sentence, and t_1, ..., t_n is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t_1, ..., t_n | s, D), where t_1, ..., t_n is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
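The maximum entropy model underlying the probabilities above assigns each outcome a normalized exponential score over its active binary features. A toy sketch, with made-up weights and feature names (none of these values come from the paper):

```python
import math

def maxent_prob(weights, features, outcomes):
    """p(o | h) = exp(sum_j w_j * f_j(h, o)) / Z(h), with binary features.
    features maps each outcome to the set of feature names active for it."""
    score = {o: math.exp(sum(weights.get(f, 0.0) for f in features[o]))
             for o in outcomes}
    z = sum(score.values())  # normalization Z(h)
    return {o: score[o] / z for o in outcomes}

# Hypothetical weights for a toy person / not-a-name decision.
w = {"initCaps&person": 1.5, "prev=mr.&person": 2.0}
active = {"person": {"initCaps&person", "prev=mr.&person"},
          "not-a-name": set()}
p = maxent_prob(w, active, ["person", "not-a-name"])
print(p)
```

With both person features active, "person" receives most of the probability mass; the probabilities always sum to 1 because of the shared normalizer.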
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al. (1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) exp(Σ_j λ_j f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and the previous word = the, and 0 otherwise.', 'The parameters λ_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability P(c_i | c_{i-1}) between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as 
follows: P(c_1, ..., c_n | s, D) = Π_{i=1..n} [ p(c_i | s, D) × P(c_i | c_{i-1}) ], where p(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen fewer than a small number of times during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', 'Table 1: Features based on the token string.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 × total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a feature (initCaps, zone) for w-1 (or for w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The strings of the previous token w-1 and the next token w+1 are used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. The same applies for w-1.
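The case-and-zone and token-string features above can be sketched as follows (a minimal sketch: the feature names follow the text, but the helper name `local_case_features` and the dictionary representation are our own, not from the system itself):

```python
import re

# Minimal sketch of the case-and-zone and token-string features described
# above. A token can set several of these features at once, since the maximum
# entropy framework imposes no priority among features.
def local_case_features(token, zone):
    feats = {}
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1      # starts with a capital letter
    if token.isupper():
        feats[("allCaps", zone)] = 1       # an allCaps token is also initCaps
    if token[:1].islower() and any(c.isupper() for c in token[1:]):
        feats[("mixedCaps", zone)] = 1
    # One of the token-string features of Table 1: initCaps ending in a period
    if re.fullmatch(r"[A-Z].*\.", token):
        feats["InitCapPeriod"] = 1
    return feats
```

For example, a token like Mr. in the TXT zone sets both (initCaps, TXT) and InitCapPeriod, while IBM in the HL zone sets both (initCaps, HL) and (allCaps, HL), matching the remark that an allCaps token is also initCaps.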
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w+1 is found in the list of person first names, the feature PersonFirstName is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday,
..., Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list cslist of tokens that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
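The "frequency" computation above can be sketched as follows (a minimal sketch: the function name `suffix_frequencies` is ours, and organization names are assumed to be pre-tokenized by whitespace):

```python
from collections import defaultdict

# "Frequency" of a candidate corporate suffix is the number of DISTINCT tokens
# that precede it as the last word of an organization name, not the raw count.
def suffix_frequencies(org_names):
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceding.items()}

# The example from the text: Electric Corp. seen 3 times and Manufacturing
# Corp. seen 5 times give Corp. a "frequency" of 2 (two distinct predecessors).
orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
assert suffix_frequencies(orgs)["Corp."] == 2
```

Counting distinct predecessors rather than raw occurrences favors suffixes that attach to many different organization names, which is what makes a token a productive suffix.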
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from the word preceding the sequence up to the token itself is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we also check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

Table 2: Sources of Dictionaries
Location Names      http://www.timeanddate.com
                    http://www.cityguide.travel-guides.com
                    http://www.worldtravelguide.net
Corporate Names     http://www.fmlx.com
Person First Names  http://www.census.gov/genealogy/names
Person Last Names

4.2 Global Features

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann ... (2)
The McCann family ... (3)
In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on ...", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on ..."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in the local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. for an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp.
has an additional feature of I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu). For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group
           MUC6     MUC7
Baseline   90.75%   85.22%
+ ICOC     91.50%   86.24%
+ CSPP     92.89%   86.96%
+ ACRO     93.04%   86.99%
+ SOIC     93.25%   87.22%
+ UNIQ     93.27%   87.24%

Table 5: Comparison of results for MUC6

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder (training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens).

Table 4: Training Data
Systems       MUC6                             MUC7
              No. of Articles  No. of Tokens   No. of Articles  No. of Tokens
MENERGI       318              160,000         200              180,000
IdentiFinder  –                650,000         –                790,000
MENE          –                –               350              321,000

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For a fair comparison, we have tabulated all results together with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they had added list membership features, which helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from these graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data alone is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results. The effect of a secondary reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates whether the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to information from the same document only.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
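As a closing illustration of how global context of this kind can be computed in a single pass over a document, the acronym-matching (ACRO) feature group of Section 4.2 can be sketched as follows (a simplified sketch: the feature labels A_begin, A_continue, A_end, and A_unique follow the text, while the function name and data representation are our own):

```python
def acronym_features(tokens):
    """Simplified sketch of the ACRO feature group: allCaps words are stored
    as acronyms, and initCaps sequences whose initials spell an acronym are
    marked A_begin / A_continue / A_end; the acronym itself gets A_unique."""
    feats = {i: set() for i in range(len(tokens))}
    acronyms = {t: i for i, t in enumerate(tokens)
                if len(t) > 1 and t.isalpha() and t.isupper()}
    for acro, pos in acronyms.items():
        n = len(acro)
        for start in range(len(tokens) - n + 1):
            span = tokens[start:start + n]
            # Each word must be initCaps (not allCaps) and supply one initial.
            if start != pos and all(w[:1] == a and not w.isupper()
                                    for w, a in zip(span, acro)):
                feats[start].add("A_begin")
                for k in range(start + 1, start + n - 1):
                    feats[k].add("A_continue")
                feats[start + n - 1].add("A_end")
                feats[pos].add("A_unique")
    return feats
```

On the FCC example from the text, the sequence Federal Communications Commission receives A_begin, A_continue, and A_end, and FCC receives A_unique.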
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
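The ICOC lookup can be sketched as follows (a simplified illustration: treating any non-sentence-initial position as "unambiguous" is an assumption, and the TXT/TEXT zone filter is omitted):

```python
def icoc_features(doc_sentences, word):
    """InitCaps of Other Occurrences: find the FIRST occurrence of `word`
    in an unambiguous (non-sentence-initial) position in the document and
    report whether that occurrence is initCaps."""
    for sent in doc_sentences:
        for k, tok in enumerate(sent):
            if k > 0 and tok.lower() == word.lower():
                is_init = tok[0].isupper()
                return {"Other-initCaps": int(is_init),
                        "Other-not-initCaps": int(not is_init)}
    return {}  # no unambiguous occurrence elsewhere in the document
```

For the "Bush put a freeze on" example, the later mention "President Bush" is non-sentence-initial and capitalized, so Other-initCaps fires.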
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
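The ACRO matching can be sketched as follows (a minimal version; matching an acronym against the initials of each initCaps word sequence is an assumption about the exact matching rule):

```python
def acronym_features(acronym, doc_sequences):
    """Match an all-caps acronym (e.g. FCC) against initCaps word sequences in
    the document; expansion words get A_begin / A_continue / A_end, and the
    acronym itself gets A_unique."""
    feats = {}
    for seq in doc_sequences:
        if "".join(w[0] for w in seq).upper() == acronym:
            feats[seq[0]] = "A_begin"
            for w in seq[1:-1]:
                feats[w] = "A_continue"
            feats[seq[-1]] = "A_end"
            feats[acronym] = "A_unique"
    return feats
```

This reproduces the FCC / Federal Communications Commission example from the text.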
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token needs to be in initCaps to be considered for this feature.', 'If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where the token appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group — MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data — MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 tokens (MUC6) and 790,000 tokens (MUC7); MENE: 350 articles / 321,000 tokens (MUC7 only).', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
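The SOIC longest-substring search described above can be sketched as follows (illustrative only; tie-breaking and the handling of single-word substrings are assumptions):

```python
def soic_features(seq, doc_sequences):
    """Sequence of Initial Caps: find the longest contiguous sub-sequence of
    `seq` that also occurs elsewhere in the document as an initCaps sequence,
    and tag its words I_begin / I_continue / I_end."""
    others = [tuple(s) for s in doc_sequences if s != seq]
    best = ()
    n = len(seq)
    for length in range(n, 0, -1):          # try longest substrings first
        for start in range(n - length + 1):
            sub = tuple(seq[start:start + length])
            if any(sub == o[k:k + length]
                   for o in others for k in range(len(o) - length + 1)):
                best = sub
                break
        if best:
            break
    feats = {}
    if len(best) >= 2:
        feats[best[0]] = "I_begin"
        for w in best[1:-1]:
            feats[w] = "I_continue"
        feats[best[-1]] = "I_end"
    return feats
```

For the Even News Broadcasting Corp. example, only News Broadcasting Corp. recurs, so Even is left untagged.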
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu.)', '(Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive N04-1038,N04-1038,1,2,"In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.",This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
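Borthwick's 80%/20% rotation scheme described above amounts to producing cross-validated held-out predictions as training material for a second-stage classifier; a minimal sketch (function names are illustrative, not from the paper):

```python
def rotated_holdout_predictions(examples, train_fn, predict_fn, folds=5):
    """Train on 80% and predict the held-out 20%, rotating `folds` times;
    the concatenated held-out predictions can then train a second,
    error-correcting classifier without leaking training labels."""
    n = len(examples)
    out = []
    for f in range(folds):
        lo, hi = f * n // folds, (f + 1) * n // folds
        held_out = examples[lo:hi]
        train = examples[:lo] + examples[hi:]
        model = train_fn(train)
        out.extend(predict_fn(model, held_out))
    return out
```

Every example receives a prediction from a model that never saw it, which is exactly what the reference-resolution component needs as realistic input.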
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) ∏_j α_j^{f_j(h,o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, f_j(h, o) is 1 if o = true and the previous word in h is "the", and 0 otherwise.', 'The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', '(Footnote 1: http://maxent.sourceforge.net)', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
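The testing-time dynamic program just described (per-word maxent probabilities multiplied by 0/1 transition probabilities, then picking the highest-probability admissible sequence) can be sketched as follows; the class names in the test are hypothetical:

```python
def best_admissible_sequence(probs, admissible):
    """probs: one dict per word mapping class -> maxent probability.
    admissible(prev, cur): True iff the class pair may occur in sequence
    (transition probability 1), False otherwise (transition probability 0).
    Returns the admissible class sequence with the highest probability product."""
    scores = dict(probs[0])  # best score of any admissible sequence ending in each class
    backptrs = []
    for dist in probs[1:]:
        new_scores, bp = {}, {}
        for cur, p in dist.items():
            candidates = [(scores[prev], prev) for prev in scores
                          if admissible(prev, cur) and scores[prev] > 0]
            if candidates:
                s, prev = max(candidates)
                new_scores[cur], bp[cur] = s * p, prev
        scores = new_scores
        backptrs.append(bp)
    cur = max(scores, key=scores.get)  # best final class, then backtrack
    seq = [cur]
    for bp in reversed(backptrs):
        cur = bp[cur]
        seq.append(cur)
    seq.reverse()
    return seq
```

Because inadmissible transitions contribute probability 0, sequences such as person_begin followed by location_unique can never be selected.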
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of the previous and next tokens: Similarly, if the previous token (or the next token) is initCaps, a corresponding feature (initCaps, zone) is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If the token is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, neighboring token string) is set to 1.', 'If it is not initCaps, then (not-initCaps, neighboring token string) is set to 1.', 'Same for the other neighbor. 
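The Case and Zone group can be sketched as a straightforward reading of the rules above (a minimal illustration; the tuple keys are only one possible encoding of the (case, zone) pairs):

```python
def case_zone_features(token, zone):
    """Case and Zone feature group: initCaps, allCaps, or mixedCaps,
    each paired with the document zone. A token that is allCaps is
    also initCaps, so both features fire for an all-caps token."""
    feats = {}
    if token[0].isupper():
        feats[("initCaps", zone)] = 1
    if token.isupper():
        feats[("allCaps", zone)] = 1
    if token[0].islower() and any(ch.isupper() for ch in token):
        feats[("mixedCaps", zone)] = 1
    return feats
```

With four zones this yields the 3 × (number of zones) binary features the text counts.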
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
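The dictionary test described above can be sketched as follows (the dictionary names and the bigram-matching-only-for-locations rule follow the text; the data structures are assumptions):

```python
def dictionary_features(tokens, i, common_words, dicts):
    """Test an initCaps token (and, for locations, the bigram starting at it)
    against each dictionary; tokens found in the frequent commonWords list
    are skipped entirely."""
    tok = tokens[i]
    feats = {}
    if not tok[0].isupper() or tok.lower() in common_words:
        return feats
    for name, entries in dicts.items():
        if tok.lower() in entries:
            feats[name] = 1
        # locations are additionally matched as bigrams, e.g. "New York"
        if name == "Location" and i + 1 < len(tokens):
            if (tok + " " + tokens[i + 1]).lower() in entries:
                feats[name] = 1
    return feats
```

The commonWords filter keeps frequent capitalized words (e.g. sentence-initial function words) from triggering spurious dictionary hits.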
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of initCaps tokens, if any token of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries — Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The token needs to be in initCaps to be considered for this feature.', 'If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where the token appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group — MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data — MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 tokens (MUC6) and 790,000 tokens (MUC7); MENE: 350 articles / 321,000 tokens (MUC7 only).', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu.)', '(Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive I05-5011,I05-5011,8,159,"In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.","Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | w1, ..., wn), where w1, ..., wn is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s.
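The per-word classification sketched above follows the standard exponential form of a maximum entropy model with binary feature functions. A minimal toy sketch, assuming made-up features and weights purely for illustration (the real model's features and GIS-trained parameters come from the paper itself):

```python
import math

# Toy maximum-entropy classifier of the exponential form
# p(o | h) = exp(sum_j w_j * f_j(h, o)) / Z(h),
# with binary feature functions f_j. The features, weights, and the
# tiny outcome set below are illustrative stand-ins, not the paper's model.

def f_prev_word_mr(history, outcome):
    return 1.0 if outcome == "person" and history.get("prev") == "Mr." else 0.0

def f_init_caps(history, outcome):
    return 1.0 if outcome != "not-a-name" and history.get("initCaps") else 0.0

FEATURES = [(f_prev_word_mr, 2.0), (f_init_caps, 0.5)]  # (function, weight)
OUTCOMES = ["person", "organization", "not-a-name"]

def p(outcome, history):
    score = lambda o: math.exp(sum(w * f(history, o) for f, w in FEATURES))
    z = sum(score(o) for o in OUTCOMES)  # normalization Z(h)
    return score(outcome) / z

history = {"prev": "Mr.", "initCaps": True}
print(max(OUTCOMES, key=lambda o: p(o, history)))  # person
```

In the paper the weights are estimated by Generalized Iterative Scaling rather than set by hand as here.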
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM)-based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1 / Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package (footnote 1: http://maxent.sourceforge.net).', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However,', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as
follows: P(c1, ..., cn | s, D) = ∏_{i=1}^{n} p(ci | s, D) × P(ci | ci−1), where p(ci | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).", 'However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training.', '(Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of the Previous and Next Tokens: Similarly, if the previous token (or the next token) is initCaps, a corresponding feature (initCaps, zone) is set to 1, etc. Token Information: This group consists of 10 features based on the token string, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If a token string is seen infrequently during training (less than a small count), then it will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, previous-token-string) is set to 1.', 'If it is not initCaps, then (not-initCaps, previous-token-string) is set to 1.', 'The same is done for the next token.
In the case where the next token is a hyphen, then the token after the hyphen is also used as a feature: (initCaps, token-after-hyphen) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the previous and next tokens are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if one of them is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If the token is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If the token is one of Monday, Tuesday, . .
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If the token is a number string (such as one, two, etc.), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
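The distinct-preceding-token "frequency" used to compile Corporate-Suffix-List can be sketched in a few lines; the function name and input format are illustrative, but the counting rule and the worked example (Electric Corp. ×3 and Manufacturing Corp. ×5 give Corp. a frequency of 2, not 8) come straight from the text:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """'Frequency' of a candidate suffix = number of DISTINCT tokens that
    precede it as the last word of an organization name."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

orgs = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
print(suffix_frequencies(orgs))  # {'Corp.': 2}
```

Counting distinct preceding tokens rather than raw occurrences keeps one very common organization name from dominating the suffix list.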
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from the one preceding the sequence to the last token of the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', '(Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.)', 'The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'The word needs to be in initCaps to be considered for this feature.', 'If the word is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.', '(Table 3: F-measure after successive addition of each global feature group. MUC6 / MUC7: Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.)', '(Table 4: Training Data. MENERGI: 318 articles / 160,000 tokens (MUC6) and 200 articles / 180,000 tokens (MUC7); IdentiFinder: 650,000 and 790,000 tokens; MENE: 350 articles / 321,000 tokens (MUC7).)', '(Table 5: Comparison of results for MUC6.)', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's.
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '(Footnote 2: MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. Footnote 3: Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.)', '(Table 6: Comparison of results for MUC7.)', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive W99-0623_vardha,W99-0623,5,72,The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t1, ..., tn | w1, ..., wn), where w1, ..., wn is the sequence of words in a sentence, and t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t1, ..., tn | s, D), where t1, ..., tn is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s.
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM)-based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '1 http://maxent.sourceforge.net', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = ∏_{i=1}^{n} P(c_i | s, D) × P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 × total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) for w_{i-1} (or for w_{i+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}. 
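The case-and-zone feature group described above can be sketched as follows; a minimal illustration under stated assumptions (the function name and the set-of-pairs output format are hypothetical, not the paper's code):

```python
def case_zone_features(token, zone):
    # Case classes from Section 4.1: initCaps, allCaps, mixedCaps,
    # each paired with the document zone (e.g., TXT, HL, DATELINE, DD).
    feats = set()
    if token[:1].isupper():
        # a token that is allCaps is also initCaps, as noted in the text
        feats.add(('initCaps', zone))
    if token.isupper():
        feats.add(('allCaps', zone))
    if token[:1].islower() and any(c.isupper() for c in token):
        feats.add(('mixedCaps', zone))
    return feats

# usage
print(case_zone_features('IBM', 'TXT'))   # initCaps and allCaps in zone TXT
print(case_zone_features('eBay', 'HL'))   # mixedCaps in zone HL
```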
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i-1} is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. 
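The 'frequency' computation for candidate corporate suffixes described above (counting distinct preceding tokens rather than raw occurrences) can be sketched as follows; a minimal sketch, with the function name and input format assumed rather than taken from the paper:

```python
from collections import defaultdict

def suffix_frequency(org_names):
    # The 'frequency' of a candidate suffix is the number of DISTINCT tokens
    # seen immediately before it in organization names. So if Electric Corp.
    # is seen 3 times and Manufacturing Corp. 5 times, Corp. has frequency 2,
    # regardless of how often each full name occurs.
    prev_tokens = defaultdict(set)
    for name in org_names:
        toks = name.split()
        for i in range(1, len(toks)):
            prev_tokens[toks[i]].add(toks[i - 1])
    return {tok: len(prevs) for tok, prevs in prev_tokens.items()}

# usage: matches the example in the text
names = ['Electric Corp.'] * 3 + ['Manufacturing Corp.'] * 5
freqs = suffix_frequency(names)   # freqs['Corp.'] == 2
```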
For a token w_i that is in a consecutive sequence of initCaps tokens (w_i, ..., w_j), if any of the tokens from w_{i+1} to w_{j+1} is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w_{i-1} to w_{j-1} is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for w_{i-1}, the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2 (Sources of Dictionaries): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names. The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .', '”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .', '”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence “Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.”, a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w_i is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3 (F-measure after successive addition of each global feature group): Baseline: MUC6 90.75%, MUC7 85.22%; + ICOC: 91.50%, 86.24%; + CSPP: 92.89%, 86.96%; + ACRO: 93.04%, 86.99%; + SOIC: 93.25%, 87.22%; + UNIQ: 93.27%, 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4 (Training Data, articles / tokens): MENERGI: MUC6 318 / 160,000, MUC7 200 / 180,000; IdentiFinder: MUC6 – / 650,000, MUC7 – / 790,000; MENE: MUC7 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3', ""In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', '2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999)."", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.']",extractive P87-1015_swastika,P87-1015,1,1,Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.,This paper presents a maximum entropy-based named entity recognizer (NER).,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(t_1, ..., t_n | s), where s is the sequence of words in a sentence, and t_1, ..., t_n is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(t_1, ..., t_n | s, D), where t_1, ..., t_n is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes × 4 sub-classes + 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o | h) = (1/Z(h)) ∏_j α_j^{f_j(h, o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.', 'In addition, each feature function f_j(h, o) is a binary function.', 'For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h, o) = 1 if o = true and previous word = the, and 0 otherwise. The parameters α_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', '1 http://maxent.sourceforge.net', '3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: P(c_1, ..., c_n | s, D) = ∏_{i=1}^{n} P(c_i | s, D) × P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token w_i, while Borthwick uses tokens from w_{i-2} to w_{i+2} (from two tokens before to two tokens after w_i), we used only the tokens w_{i-1}, w_i, and w_{i+1}. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token w_i, zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.)', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 x total number of possible zones) features.', 'Case and Zone of w_{i-1} and w_{i+1}: Similarly, if w_{i-1} (or w_{i+1}) is initCaps, a feature (initCaps, zone) of w_{i-1} (or of w_{i+1}) is set to 1, etc. Token Information: This group consists of 10 features based on the string w_i, as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token w_i is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If w_i is seen infrequently during training (less than a small count), then w_i will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token w_{i-1} and the next token w_{i+1} is used with the initCaps information of w_i. If w_i has initCaps, then a feature (initCaps, w_{i+1}) is set to 1.', 'If w_i is not initCaps, then (not-initCaps, w_{i+1}) is set to 1.', 'Same for w_{i-1}.
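The case-based feature groups above are simple binary indicators. A minimal sketch of how such a group might be computed (our own illustrative helper, not the authors' code or the opennlp API):

```python
def case_and_zone_features(token, zone):
    """Sketch of the Case and Zone feature group: emit (case, zone)
    indicator pairs for a token.  Feature names follow the text;
    the function itself is an illustration, not the paper's code."""
    feats = []
    if token[:1].isupper():                    # initCaps
        feats.append(("initCaps", zone))
    if token.isupper():                        # allCaps (also initCaps)
        feats.append(("allCaps", zone))
    if token[:1].islower() and any(c.isupper() for c in token):
        feats.append(("mixedCaps", zone))      # e.g. "eBay"
    return feats
```

In a real system each emitted pair would be mapped to one binary feature f_j(h, o); a token such as IBM in the TXT zone switches on both (initCaps, TXT) and (allCaps, TXT), consistent with the remark that an allCaps token is also initCaps.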
In the case where the next token w_{i+1} is a hyphen, then w_{i+2} is also used as a feature: (initCaps, w_{i+2}) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'The location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens w_{i-1} and w_{i+1} are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if w_{i+1} is found in the list of person first names, the feature PersonFirstName of w_{i+1} is set to 1.', 'Month Names, Days of the Week, and Numbers: If w_i is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If w_i is one of Monday, Tuesday, . .
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If w_i is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
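The "frequency" used to rank candidate corporate suffixes (the number of distinct preceding tokens) can be sketched as follows; the function name and input format are our own assumptions, not the paper's code:

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each token that ends an organization name, count the number of
    distinct tokens that precede it (the paper's notion of 'frequency')."""
    preceding = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceding[tokens[-1]].add(tokens[-2])
    return {last: len(prev) for last, prev in preceding.items()}

# The text's example: Electric Corp. seen 3 times, Manufacturing Corp. seen
# 5 times, and Corp. with no other preceding token, gives Corp. a frequency of 2.
names = ["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5
```

Counting distinct preceding tokens rather than raw occurrences keeps a suffix that appears with many different organizations (Corp.) ranked above a token that merely repeats inside one frequent name.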
For a token w_i that is in a consecutive sequence of initCaps tokens (w_i, . . . , w_{i+n}), if any of the tokens from w_{i+1} to w_{i+n+1} is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from w_{i-1} down to the token immediately preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2)', 'Table 2: Sources of Dictionaries. Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . .
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentences (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with "Bush put a freeze on . . .', '", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . .', '").', 'If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp.
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'w_i needs to be in initCaps to be considered for this feature.', 'If w_i is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w_i appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 4: Training Data (No. of Articles / No. of Tokens): MENERGI MUC6 318 / 160,000 and MUC7 200 / 180,000; IdentiFinder MUC6 - / 650,000 and MUC7 - / 790,000; MENE MUC7 350 / 321,000.', 'Table 5: Comparison of results for MUC6.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder.3', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's.
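The error-reduction figures quoted in the text follow directly from the Table 3 F-measures, treating the shortfall from 100% as the error. A quick check (our own arithmetic, not code from the paper):

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error when the F-measure rises from
    baseline_f to final_f, with error taken as (100 - F)."""
    return (final_f - baseline_f) / (100.0 - baseline_f)

muc6 = error_reduction(90.75, 93.27)   # about 0.27, i.e. the quoted 27%
muc7 = error_reduction(85.22, 87.24)   # about 0.14, i.e. the quoted 14%
```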
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998).", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', '2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu', '3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.', 'Table 6: Comparison of results for MUC7.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', "We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', "Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999).", 'Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulations of smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive C02-1025,C02-1025,2,6,"NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.","A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence-based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability P(C | s), where s is the sequence of words in a sentence, and C is the sequence of named-entity tags assigned to the words in s. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing P(C | s, D), where C is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s.
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.', 'Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.', 'The system described in
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token in a consecutive sequence of initCaps tokens, if one of the tokens in the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If the word preceding the consecutive sequence of initCaps tokens is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the sequence, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)
The McCann family . . . (3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

Table 2: Sources of Dictionaries

Description          Source
Location Names       http://www.timeanddate.com
                     http://www.cityguide.travel-guides.com
                     http://www.worldtravelguide.net
Corporate Names      http://www.fmlx.com
Person First Names   http://www.census.gov/genealogy/names
Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. for an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs elsewhere in the same document is News Broadcasting Corp. In this case, News has an additional feature I begin set to 1, Broadcasting has an additional feature I continue set to 1, and Corp. has an additional feature I end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. The token needs to be in initCaps to be considered for this feature. If the token is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where it appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

Table 3: F-measure after successive addition of each global feature group

             MUC6     MUC7
Baseline     90.75%   85.22%
+ ICOC       91.50%   86.24%
+ CSPP       92.89%   86.96%
+ ACRO       93.04%   86.99%
+ SOIC       93.25%   87.22%
+ UNIQ       93.27%   87.24%

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3

Table 4: Training Data

               MUC6                  MUC7
               Articles   Tokens     Articles   Tokens
MENERGI        318        160,000    200        180,000
IdentiFinder   -          650,000    -          790,000
MENE           -          -          350        321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
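The error-reduction figures quoted above follow from treating 100 - F as the residual error. A small illustrative check (the function name is ours, not part of the system):

```python
def error_reduction(baseline_f, final_f):
    # F-measures are percentages; residual error is 100 - F.
    base_err = 100.0 - baseline_f
    final_err = 100.0 - final_f
    return 100.0 * (base_err - final_err) / base_err

# Baseline vs. all global feature groups added (Table 3):
print(round(error_reduction(90.75, 93.27)))  # MUC6: prints 27
print(round(error_reduction(85.22, 87.24)))  # MUC7: prints 14
```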
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.

2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sun day, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the â\x80\x9cfrequencyâ\x80\x9d of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate- Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix- List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix- List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Description Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries The McCann family . . 
.', '(3)In sentence (1), McCann can be a person or an orga nization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with â\x80\x9cBush put a freeze on . . .', 'â\x80\x9d, because Bush is the first word, the initial caps might be due to its position (as in â\x80\x9cThey put a freeze on . . .', 'â\x80\x9d).', 'If somewhere else in the document we see â\x80\x9crestrictions put in place by President Bushâ\x80\x9d, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC6 MUC7 Baseline 90.75% 85.22% + ICOC 91.50% 86.24% + CSPP 92.89% 86.96% + ACRO 93.04% 86.99% + SOIC 93.25% 87.22% + UNIQ 93.27% 87.24% Table 3: F-measure after successive addition of each global feature group Table 5: Comparison of results for MUC6 Systems MUC6 MUC7 No.', 'of Articles No.', 'of Tokens No.', 'of Articles No.', 'of Tokens MENERGI 318 160,000 200 180,000 IdentiFinder â\x80\x93 650,000 â\x80\x93 790,000 MENE â\x80\x93 â\x80\x93 350 321,000 Table 4: Training Data MUC7 test accuracy.2 For MUC6, the reduction in error due to global features is 27%, and for MUC7,14%.', 'ICOC and CSPP contributed the greatest im provements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', ""In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999)."", ""IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. 
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",extractive I05-5011,I05-5011,6,18,Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.,"These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).","['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
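The objective just described is a conditional maximum entropy model over named-entity tag sequences. The following is a minimal illustrative sketch of such a model in its standard exponential form, with hypothetical feature functions and hand-picked weights rather than the paper's trained model:

```python
import math

# Minimal sketch (feature names and weights are hypothetical): a conditional
# maximum entropy distribution p(o | h) = exp(sum_i w_i * f_i(h, o)) / Z(h),
# where each f_i is a binary feature function over (history, outcome).
def features(history: dict, outcome: str) -> list[int]:
    return [
        int(outcome.startswith("person") and history.get("init_caps", False)),
        int(outcome == "not-a-name" and not history.get("init_caps", False)),
        int(outcome.startswith("org") and history.get("corp_suffix", False)),
    ]

def maxent_prob(history: dict, outcomes: list[str], weights: list[float]) -> dict:
    scores = {o: math.exp(sum(w * f for w, f in zip(weights, features(history, o))))
              for o in outcomes}
    z = sum(scores.values())  # normalization Z(h)
    return {o: s / z for o, s in scores.items()}

p = maxent_prob({"init_caps": True, "corp_suffix": False},
                ["person_unique", "org_unique", "not-a-name"],
                [1.5, 2.0, 1.2])
```

With these toy weights, an initCaps token without a corporate suffix gets most of its probability mass on the person class; in the real system the weights are estimated from training data by Generalized Iterative Scaling.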
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', "Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.", "MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data.", 'MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', "We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD).', 'Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.', 'Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.', 'If it is made up of all capital letters, then (allCaps, zone) is set to 1.', 'If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1.', 'A token that is allCaps will also be initCaps.', 'This group consists of (3 total number of possible zones) features.', 'Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1.', 'For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.', 'If the token is the first word of a sentence, then this feature is set to 1.', 'Otherwise, it is set to 0.', 'Lexicon Feature: The string of the token is used as a feature.', 'This group contains a large number of features (one for each token string present in the training data).', 'At most one feature in this group will be set to 1.', 'If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.', 'Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps, ) is set to 1.', 'If is not initCaps, then (not-initCaps, ) is set to 1.', 'Same for . 
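The "Case and Zone" feature group described above can be sketched as follows; the feature encoding is illustrative, not the paper's exact representation:

```python
# Sketch of the "Case and Zone" local feature group: a token contributes
# (initCaps, zone), (allCaps, zone), or (mixedCaps, zone) binary features.
def case_and_zone_features(token: str, zone: str) -> dict:
    feats = {}
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1
    if token.isalpha() and token.isupper():
        feats[("allCaps", zone)] = 1      # an allCaps token is also initCaps
    if token[:1].islower() and not token.islower():
        feats[("mixedCaps", zone)] = 1    # starts lower case, mixed case overall
    return feats

case_and_zone_features("IBM", "TXT")  # {('initCaps', 'TXT'): 1, ('allCaps', 'TXT'): 1}
```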
In the case where the next token is a hyphen, then is also used as a feature: (init- Caps, ) is set to 1.', 'This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).', 'Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.', 'Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task.', 'The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999).', 'The sources of our dictionaries are listed in Table 2.', 'For all lists except locations, the lists are processed into a list of tokens (unigrams).', 'Location list is processed into a list of unigrams and bigrams (e.g., New York).', 'For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams.', 'A list of words occurring more than 10 times in the training data is also collected (commonWords).', 'Only tokens with initCaps not found in commonWords are tested against each list in Table 2.', 'If they are found in a list, then a feature for that list will be set to 1.', 'For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.', 'Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.', 'For example, if is found in the list of person first names, the feature PersonFirstName is set to 1.', 'Month Names, Days of the Week, and Numbers: If is initCaps and is one of January, February, . . .', ', December, then the feature MonthName is set to 1.', 'If is one of Monday, Tuesday, . . 
.', ', Sunday, then the feature DayOfTheWeek is set to 1.', 'If is a number string (such as one, two, etc), then the feature NumberString is set to 1.', 'Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix.', 'Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data.', 'For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data.', 'Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).', 'The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List.', 'A Person-Prefix-List is compiled in an analogous way.', 'For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. . 
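The "frequency" used to compile Corporate-Suffix-List, as defined above (the number of distinct previous tokens seen before each final token of an organization name), can be sketched as:

```python
from collections import defaultdict

# Sketch of the suffix "frequency": for each token seen as the last word of an
# organization name, count its distinct previous tokens. In the text's example,
# Corp. after Electric and Manufacturing gives Corp. a frequency of 2.
def suffix_frequencies(org_names: list[list[str]]) -> dict:
    prev_tokens = defaultdict(set)
    for name in org_names:
        if len(name) >= 2:
            prev_tokens[name[-1]].add(name[-2])
    return {last: len(prevs) for last, prevs in prev_tokens.items()}

orgs = [["Electric", "Corp."], ["Electric", "Corp."],
        ["Electric", "Corp."], ["Manufacturing", "Corp."]]
suffix_frequencies(orgs)  # {'Corp.': 2}
```

The most frequent entries of this map would then be kept as Corporate-Suffix-List; a Person-Prefix-List is built analogously from tokens preceding person names.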
For a token that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1.', 'If any of the tokens from to is in Person-Prefix-List, then another feature Person-Prefix is set to 1.', 'Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.', '4.2 Global Features.', 'Context from the whole document can be important in classifying a named entity.', 'A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.', 'Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).', 'We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned.', 'For example: McCann initiated a new global system.', '(1) CEO of McCann . . .', '(2) Table 2: Sources of Dictionaries (Description: Source): Location Names: http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net; Corporate Names: http://www.fmlx.com; Person First Names: http://www.census.gov/genealogy/names; Person Last Names.', 'The McCann family . . 
.', '(3) In sentence (1), McCann can be a person or an organization.', 'Sentence (2) and (3) help to disambiguate one way or the other.', 'If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.', 'The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.', 'For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own.', 'For example, in the sentence that starts with “Bush put a freeze on . . .”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .”).', 'If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.', 'Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. 
McCann somewhere else in the document, then one would like to give person a higher probability than organization.', 'On the other hand, if it is seen as McCann Pte.', 'Ltd., then organization will be more probable.', 'With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.', 'Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).', 'The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.', 'Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.', 'For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.', 'Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.', 'However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.', 'This group of features attempts to capture such information.', 'For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.', 'For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. 
has an additional feature of I end set to 1.', 'Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.', 'needs to be in initCaps to be considered for this feature.', 'If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.', 'As we will see from Table 3, not much improvement is derived from this feature.', 'The baseline system in Table 3 refers to the maximum entropy system that uses only local features.', 'As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2', 'Table 3: F-measure after successive addition of each global feature group (MUC6 / MUC7): Baseline 90.75% / 85.22%; + ICOC 91.50% / 86.24%; + CSPP 92.89% / 86.96%; + ACRO 93.04% / 86.99%; + SOIC 93.25% / 87.22%; + UNIQ 93.27% / 87.24%.', 'Table 5: Comparison of results for MUC6.', 'Table 4: Training Data (No. of Articles / No. of Tokens, for MUC6 and MUC7): MENERGI 318 / 160,000 and 200 / 180,000; IdentiFinder – / 650,000 and – / 790,000; MENE – / – and 350 / 321,000.', 'For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.', 'ICOC and CSPP contributed the greatest improvements.', 'The effect of UNIQ is very small on both data sets.', 'All our results are obtained by using only the official training data provided by the MUC conferences.', 'The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.', 'As can be seen in Table 4, our training data is a lot less than those used by MENE and IdentiFinder3.', "In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999).", "IdentiFinder '99's results are considerably better than IdentiFinder '97's. 
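The ACRO global feature group described earlier (matching an acronym such as FCC against "Federal Communications Commission" and emitting A begin / A continue / A end / A unique) can be sketched as follows; the first-letter matching heuristic shown is a simplification, not the system's exact procedure:

```python
# Illustrative sketch of the ACRO feature: match a stored acronym against a
# sequence of initial-capitalized words by their first letters, then assign
# the paper's A_begin / A_continue / A_end / A_unique sub-class features.
def acro_features(acronym: str, phrase: list[str]) -> dict:
    initials = "".join(w[0] for w in phrase if w)
    if acronym.upper() != initials.upper() or len(phrase) < 2:
        return {}
    tags = {phrase[0]: "A_begin", phrase[-1]: "A_end", acronym: "A_unique"}
    for w in phrase[1:-1]:
        tags[w] = "A_continue"
    return tags

acro_features("FCC", ["Federal", "Communications", "Commission"])
# -> {'Federal': 'A_begin', 'Commission': 'A_end', 'FCC': 'A_unique',
#     'Communications': 'A_continue'}
```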
IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998)."", 'MENE has only been tested on MUC7.', 'For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).', 'Besides size of training data, the use of dictionaries is another factor that might affect performance.', 'Bikel et al.', '(1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains.', 'Borth 2MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens Table 6: Comparison of results for MUC7 wick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.', 'In MUC6, the best result is achieved by SRA (Krupka, 1995).', 'In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size.', ""We have estimated the performance of IdentiFinder ' 99 at 200K words of training data from the graphs."", 'For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.', 'In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.', 'Both BBN and NYU have tagged their own data to supplement the official training data.', ""Even with less training data, MENERGI outperforms Borthwick' s MENE + reference resolution (Borthwick, 1999)."", 'Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.', 'The effect of a second reference resolution classifier is not entirely the same as 
that of global features.', 'A secondary reference resolution classifier has information on the class assigned by the primary classifier.', 'Such a classification can be seen as a not-always-correct summary of global features.', 'The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.', 'We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.', 'Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.', 'Hence we decided to restrict ourselves to only information from the same document.', 'Mikheev et al.', '(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.', 'The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.', 'We have shown that the maximum entropy framework is able to use global information directly.', 'This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).', 'Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs.', 'Information from a sentence is sometimes insufficient to classify a name correctly.', 'Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.', 'We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.', 'Borth- wick (1999) successfully made use of other hand- coded systems as input for his MENE system, and achieved 
excellent results.', 'However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English.', 'We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.']",abstractive C02-1025,C02-1025,4,63,They have made use of local and global features to deal with the instances of same token in a document.,Global features are extracted from other occurrences of the same token in the whole document.,"['Named Entity Recognition: A Maximum Entropy Approach Using Global Information', 'This paper presents a maximum entropy-based named entity recognizer (NER).', 'It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.', 'Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.', 'In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.', 'Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).', 'A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. 
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (a 27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (a 14% reduction in errors). These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999). We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush", then "Bush"). As such, global information from the whole context of a document is important to more accurately recognize named entities. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.

Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC7 participants. MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999). Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999). He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 * 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data. However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data). On the MUC6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on a comparable amount of training data.

The system described in this paper is similar to the MENE system of (Borthwick, 1999). It uses a maximum entropy framework and classifies each word given its features. Each name class is subdivided into 4 sub-classes, i.e., N_begin, N_continue, N_end, and N_unique. Hence, there is a total of 29 classes (7 name classes x 4 sub-classes + 1 not-a-name class).

3.1 Maximum Entropy.

The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) * prod_j alpha_j^{f_j(h,o)}, where o refers to the outcome, h the history (or context), and Z(h) is a normalization function. In addition, each feature function f_j(h,o) is a binary function. For example, in predicting if a word belongs to a word class, o is either true or false, and h refers to the surrounding context: f_j(h,o) = 1 if o = true and the previous word = the, and 0 otherwise. The parameters alpha_j are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package (http://maxent.sourceforge.net). In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.

3.2 Testing.

During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person_begin followed by location_unique). To eliminate such sequences, we define a transition probability between word classes P(c_i|c_{i-1}) to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows: P(c_1, ..., c_n | s, D) = prod_i P(c_i | s, D) * P(c_i | c_{i-1}), where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). However, to classify a token w, while Borthwick uses tokens from w-2 to w+2 (from two tokens before to two tokens after w), we used only the tokens w-1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token w, zero, one, or more of the features in each feature group are set to 1.

4.1 Local Features.

The local feature groups are:

Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints that are based on the probability of each name class during training.

Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 x total number of possible zones) features.

Case and Zone of w-1 and w+1: Similarly, if w-1 (or w+1) is initCaps, a corresponding (initCaps, zone) feature for w-1 (or w+1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string of the token w, as listed in Table 1 (features based on the token string). For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. Same for w-1.
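As a concrete sketch of the Case and Zone feature group just described (an illustrative reconstruction, not the authors' code; the helper name and the dictionary representation of binary features are invented), the (case, zone) flags for a token might be computed as:

```python
# Sketch of the "Case and Zone" local feature group.
# Hypothetical helper, not from the paper's implementation.

ZONES = ["TXT", "HL", "DATELINE", "DD"]  # the four MUC6 document zones

def case_and_zone_features(token: str, zone: str) -> dict:
    """Return binary (case, zone) features for a token.

    initCaps  : starts with a capital letter
    allCaps   : made up of all capital letters (an allCaps token
                is also initCaps, as stated in the text)
    mixedCaps : starts lower case but contains both cases
    """
    assert zone in ZONES
    feats = {(case, z): 0
             for case in ("initCaps", "allCaps", "mixedCaps")
             for z in ZONES}
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1
        if token.isupper():
            feats[("allCaps", zone)] = 1
    elif token[:1].islower() and any(c.isupper() for c in token):
        feats[("mixedCaps", zone)] = 1
    return feats
```

For example, "IBM" in the TXT zone sets both (initCaps, TXT) and (allCaps, TXT), while "eBay" sets only (mixedCaps, zone); a plain lower-case token sets nothing in this group.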
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w-1 and w+1 are tested against each list, and if found, the corresponding feature (e.g., PersonFirstName) is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ..., December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
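The "frequency" used to rank candidate corporate suffixes can be sketched as follows. This is a reconstruction of the counting rule described above (function name and data layout are invented), using the Electric Corp. / Manufacturing Corp. example from the text:

```python
from collections import defaultdict

def suffix_frequencies(org_names: list) -> dict:
    """For each token seen as the last word of an organization name,
    count the number of DISTINCT preceding tokens (the paper's
    notion of "frequency"), not the raw number of occurrences."""
    preceding = defaultdict(set)
    for name in org_names:          # each name is a list of tokens
        if len(name) >= 2:
            preceding[name[-1]].add(name[-2])
    return {suffix: len(prevs) for suffix, prevs in preceding.items()}

# Worked example from the text: Electric Corp. seen 3 times and
# Manufacturing Corp. seen 5 times gives Corp. a "frequency" of 2,
# because only 2 distinct tokens precede it.
names = [["Electric", "Corp."]] * 3 + [["Manufacturing", "Corp."]] * 5
print(suffix_frequencies(names))
```

Counting distinct contexts rather than raw occurrences keeps a suffix that appears with many different organizations ranked above a token that merely repeats inside one frequent name.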
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens following the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from w-1 back to the token preceding the sequence is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features.

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

(1) McCann initiated a new global system.
(2) CEO of McCann . . .
(3) The McCann family . . .

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.

Table 2: Sources of dictionaries
Description         Source
Location Names      http://www.timeanddate.com
                    http://www.cityguide.travel-guides.com
                    http://www.worldtravelguide.net
Corporate Names     http://www.fmlx.com
Person First Names  http://www.census.gov/genealogy/names
Person Last Names

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp. has an additional feature of I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. The token w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy. (MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu.)

Table 3: F-measure after successive addition of each global feature group
            MUC6     MUC7
Baseline    90.75%   85.22%
+ ICOC      91.50%   86.24%
+ CSPP      92.89%   86.96%
+ ACRO      93.04%   86.99%
+ SOIC      93.25%   87.22%
+ UNIQ      93.27%   87.24%

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets. All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder. (Training data for IdentiFinder is actually given in words, i.e., 650K and 790K words, rather than tokens.)

Table 4: Training data
              MUC6                MUC7
Systems       Articles  Tokens    Articles  Tokens
MENERGI       318       160,000   200       180,000
IdentiFinder  -         650,000   -         790,000
MENE          -         -         350       321,000

Table 5: Comparison of results for MUC6

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
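The error reductions quoted above follow directly from the F-measures in Table 3, taking the error to be (100 - F). As a quick arithmetic check (the helper is illustrative, not from the paper):

```python
def error_reduction(baseline_f: float, new_f: float) -> float:
    """Relative reduction in error (100 - F) when the F-measure
    moves from baseline_f to new_f, both given in percent."""
    return (new_f - baseline_f) / (100.0 - baseline_f)

# MUC6: baseline 90.75% -> 93.27% with all global features added
print(round(100 * error_reduction(90.75, 93.27)))  # 27
# MUC7: baseline 85.22% -> 87.24%
print(round(100 * error_reduction(85.22, 87.24)))  # 14
```

Both values match the 27% and 14% reductions reported in the text.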
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
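As a closing illustration, the acronym-matching (ACRO) global feature described in Section 4.2 can be sketched as follows. This is a simplified, invented reconstruction, not the authors' implementation: the real system restricts acronym collection to the text zone and combines these flags with the other feature groups in the maximum entropy classifier.

```python
def acro_features(tokens: list) -> dict:
    """Mark initCaps sequences whose initials spell an all-caps
    acronym found elsewhere in the same document (A_begin /
    A_continue / A_end), and mark the acronym itself (A_unique)."""
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats = {}
    i = 0
    while i < len(tokens):
        # collect a maximal run of initCaps (but not all-caps) words
        j = i
        while j < len(tokens) and tokens[j][:1].isupper() and not tokens[j].isupper():
            j += 1
        if j - i >= 2:
            initials = "".join(t[0] for t in tokens[i:j])
            if initials in acronyms:
                feats[i] = "A_begin"
                for k in range(i + 1, j - 1):
                    feats[k] = "A_continue"
                feats[j - 1] = "A_end"
        i = max(j, i + 1)
    for i, t in enumerate(tokens):
        if t in acronyms:
            feats[i] = "A_unique"
    return feats

doc = "The FCC ruled . Federal Communications Commission officials said".split()
print(acro_features(doc))
```

On this toy document, Federal / Communications / Commission receive A_begin / A_continue / A_end, and FCC receives A_unique, mirroring the FCC example in the text.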
On its own, a NER can also provide users who are looking for person or organization names with quick information.', 'In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC6 and MUC 7 achieved accuracy comparable to rule-based systems on the named entity task.', 'Statistical NERs usually find the sequence of tags that maximizes the probability , where is the sequence of words in a sentence, and is the sequence of named-entity tags assigned to the words in . Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999).', 'We propose maximizing , where is the sequence of named- entity tags assigned to the words in the sentence , and is the information that can be extracted from the whole document containing . 
Our system is built on a maximum entropy classifier.', 'By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.', 'We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).', 'As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.', 'The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors).', 'These results are achieved by training on the official MUC6 and MUC7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC6 or MUC7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borth- wick, 1999).', 'We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first â\x80\x9cPresident George Bushâ\x80\x9d then â\x80\x9cBushâ\x80\x9d).', 'As such, global information from the whole context of a document is important to more accurately recognize named entities.', 'Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.', 'Recently, statistical NERs have achieved results that are comparable to hand-coded systems.', ""Since MUC6, BBN' s Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance."", ""MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev' s system, which achieved the best performance of 93.39% on the official NE test data."", 'MENE (Maximum Entropy Named Entity) (Borth- wick, 1999) was combined with Proteus (a hand- coded system), and came in fourth among all MUC 7 
participants.', 'MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).', 'Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.', 'MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.', 'By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).', 'Mikheev et al.', '(1998) did make use of information from the whole document.', 'However, their system is a hybrid of hand-coded rules and machine learning methods.', 'Another attempt at using global information can be found in (Borthwick, 1999).', 'He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.', 'Reference resolution involves finding words that co-refer to the same entity.', 'In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.', 'MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.', 'This process is repeated 5 times by rotating the data appropriately.', 'Finally, the concatenated 5 * 20% output is used to train the reference resolution component.', ""We will show that by giving the first model some global features, MENERGI outperforms Borthwick' s reference resolution classifier."", 'On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.', 'both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data).', 'On the MUC6 data, Bikel et al.', '(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.', 'Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.', 'The system described in 
this paper is similar to the MENE system of (Borthwick, 1999).', 'It uses a maximum entropy framework and classifies each word given its features.', 'Each name class is subdivided into 4 sub-classes, i.e., N begin, N continue, N end, and N unique.', 'Hence, there is a total of 29 classes (7 name classes 4 sub-classes 1 not-a-name class).', '3.1 Maximum Entropy.', 'The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.', 'Such constraints are derived from training data, expressing some relationship between features and outcome.', 'The probability distribution that satisfies the above property is the one with the highest entropy.', 'It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.', 'In addition, each feature function is a binary function.', 'For example, in predicting if a word belongs to a word class, is either true or false, and refers to the surrounding context: if = true, previous word = the otherwise The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).', 'This is an iterative method that improves the estimation of the parameters at each iteration.', 'We have used the Java-based opennlp maximum entropy package1.', 'In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI.', 'However, 1 http://maxent.sourceforge.net 3.2 Testing.', 'During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique).', 'To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.', 'The probability of the classes assigned to the words in a sentence in a document is defined as 
follows: where is determined by the maximum entropy classifier.', 'A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.', 'The features we used can be divided into 2 classes: local and global.', 'Local features are features that are based on neighboring tokens, as well as the token itself.', 'Global features are extracted from other occurrences of the same token in the whole document.', ""The local features used are similar to those used in BBN' s IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999)."", 'However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).', 'This might be because our features are more comprehensive than those used by Borthwick.', 'In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used.', 'In the maximum entropy framework, there is no such constraint.', 'Multiple features can be used for the same token.', 'Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.', 'We group the features used into feature groups.', 'Each feature group can be made up of many binary features.', 'For each token , zero, one, or more of the features in each feature group are set to 1.', '4.1 Local Features.', 'The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens.', 'This feature imposes constraints Table 1: Features based on the token string that are based on the probability of each name class during training.', 'Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).', 'The zone to which a token belongs is used as a feature.', 'For example, in MUC6, there are four zones 
(TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.

Case and Zone: If the token w starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 x total number of possible zones) features.

Case and Zone of w+1 and w-1: Similarly, if w+1 (or w-1) is initCaps, a feature (initCaps, zone) of w+1 (or of w-1) is set to 1, etc.

Token Information: This group consists of 10 features based on the string of w, as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.

First Word: This feature group contains only one feature, firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.

Lexicon Feature: The string of the token w is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If w is seen infrequently during training (less than a small count), then w will not be selected as a feature and all features in this group are set to 0.

Lexicon Feature of Previous and Next Token: The string of the previous token w-1 and the next token w+1 is used together with the initCaps information of w. If w has initCaps, then a feature (initCaps, w+1) is set to 1. If w is not initCaps, then (not-initCaps, w+1) is set to 1. The same applies for w-1.
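As a minimal sketch of how binary feature groups like these can be computed for a token: the function names, the feature-name strings, and the simplified zone handling below are illustrative assumptions, not the authors' implementation.

```python
def case_feature(token):
    """Case category used by the Case and Zone group: initCaps, allCaps,
    or mixedCaps (None for lowercase tokens). An allCaps token is reported
    as allCaps, although by the paper's definition it is also initCaps."""
    if token.isupper():
        return "allCaps"
    if token[:1].isupper():
        return "initCaps"
    if any(c.isupper() for c in token):
        return "mixedCaps"  # starts lowercase, contains both cases
    return None

def local_features(tokens, i, zone):
    """Return the set of binary feature names that fire for w = tokens[i]."""
    w = tokens[i]
    feats = {"non-contextual", "zone-" + zone}  # fires for every token
    case = case_feature(w)
    if case:
        feats.add("(%s, zone-%s)" % (case, zone))
    if i == 0:
        feats.add("firstword")
    if w[:1].isupper() and w.endswith("."):
        feats.add("InitCapPeriod")  # one of the Table 1 token-string features
    # Lexicon feature of previous/next token, conditioned on initCaps of w
    prefix = "initCaps" if w[:1].isupper() else "not-initCaps"
    if i + 1 < len(tokens):
        feats.add("(%s, next=%s)" % (prefix, tokens[i + 1]))
    if i > 0:
        feats.add("(%s, prev=%s)" % (prefix, tokens[i - 1]))
    return feats
```

In a maximum entropy system each of these returned names would be a distinct binary feature f_j; nothing prevents several of them firing on the same token, unlike IdentiFinder's priority scheme.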
In the case where the next token w+1 is a hyphen, then w+2 is also used as a feature: (initCaps, w+2) is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).

Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.

Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). The sources of our dictionaries are listed in Table 2. For all lists except locations, the lists are processed into a list of tokens (unigrams). The location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2. If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens w+1 and w-1 are tested against each list, and if found, a corresponding feature will be set to 1. For example, if w+1 is found in the list of person first names, the feature PersonFirstName of w+1 is set to 1.

Month Names, Days of the Week, and Numbers: If w is initCaps and is one of January, February, ...
, December, then the feature MonthName is set to 1. If w is one of Monday, Tuesday, ..., Sunday, then the feature DayOfTheWeek is set to 1. If w is a number string (such as one, two, etc.), then the feature NumberString is set to 1.

Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the "frequency" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms.
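The "frequency" used to compile Corporate-Suffix-List (the number of distinct preceding tokens seen with each candidate last token) can be sketched as follows; this is a minimal illustration using the Corp. example from the text, not the authors' code.

```python
from collections import defaultdict

def suffix_frequencies(org_names):
    """For each last token of an organization name in the training data,
    count the number of DISTINCT immediately preceding tokens it has."""
    preceders = defaultdict(set)
    for name in org_names:
        tokens = name.split()
        if len(tokens) >= 2:
            preceders[tokens[-1]].add(tokens[-2])
    return {suffix: len(prev) for suffix, prev in preceders.items()}

# As in the text: Electric Corp. seen 3 times and Manufacturing Corp. seen
# 5 times gives Corp. a "frequency" of 2 (two distinct preceding tokens).
freqs = suffix_frequencies(["Electric Corp."] * 3 + ["Manufacturing Corp."] * 5)
```

The most frequent entries of such a table would then be kept as the corporate suffix list; a person-prefix counterpart would count distinct following tokens instead.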
For a token w that is in a consecutive sequence of initCaps tokens, if any of the tokens from w to the last token of the sequence is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from the word preceding the sequence up to w is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc. are not part of person names, whereas corporate suffixes like Corp., Inc., etc. are part of corporate names.

4.2 Global Features

Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. For example:

McCann initiated a new global system. (1)
CEO of McCann . . . (2)
The McCann family . . .

Table 2: Sources of Dictionaries
Description          Source
Location Names       http://www.timeanddate.com
                     http://www.cityguide.travel-guides.com
                     http://www.worldtravelguide.net
Corporate Names      http://www.fmlx.com
Person First Names   http://www.census.gov/genealogy/names
Person Last Names
(3)

In sentence (1), McCann can be a person or an organization. Sentences (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) as either person or organization, unless there is some other information provided.

The global feature groups are:

InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non-first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc.), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with "Bush put a freeze on . . .", because Bush is the first word, the initial caps might be due to its position (as in "They put a freeze on . . ."). If somewhere else in the document we see "restrictions put in place by President Bush", then we can be surer that Bush is a name.

Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr.
McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token w seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.

Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A_begin set to 1, Communications has A_continue set to 1, Commission has A_end set to 1, and FCC has A_unique set to 1.

Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I_begin set to 1, Broadcasting has an additional feature of I_continue set to 1, and Corp.
has an additional feature of I_end set to 1.

Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. As we will see from Table 3, not much improvement is derived from this feature.

The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC6 and MUC7 test accuracy.2

Table 3: F-measure after successive addition of each global feature group
            MUC6     MUC7
Baseline    90.75%   85.22%
+ ICOC      91.50%   86.24%
+ CSPP      92.89%   86.96%
+ ACRO      93.04%   86.99%
+ SOIC      93.25%   87.22%
+ UNIQ      93.27%   87.24%

Table 4: Training Data
              MUC6                    MUC7
              Articles  Tokens        Articles  Tokens
MENERGI       318       160,000       200       180,000
IdentiFinder  -         650,000      -         790,000
MENE          -         -            350       321,000

Table 5: Comparison of results for MUC6

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%. ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets.

All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC6 and MUC7 training data at the same time is that the task specifications for the two tasks are not identical. As can be seen in Table 4, our training data is a lot less than that used by MENE and IdentiFinder.3

In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). IdentiFinder '99's results are considerably better than IdentiFinder '97's.
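The reported 27% and 14% error reductions follow directly from the Table 3 F-measures, treating 100% minus F-measure as the error rate; a quick arithmetic check:

```python
def error_reduction(baseline_f, final_f):
    """Relative reduction in error (100 - F) from baseline to final system."""
    base_err, final_err = 100 - baseline_f, 100 - final_f
    return (base_err - final_err) / base_err

# Baseline vs. full feature set (+ UNIQ) from Table 3
muc6 = error_reduction(90.75, 93.27)  # about 0.27, i.e., 27%
muc7 = error_reduction(85.22, 87.24)  # about 0.14, i.e., 14%
```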
IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). MENE has only been tested on MUC7. For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides the size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions.

2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu
3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens.

Table 6: Comparison of results for MUC7

In MUC6, the best result is achieved by SRA (Krupka, 1995). In (Bikel et al., 1997) and (Bikel et al., 1999), performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs. For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable, as the articles in this data set are entirely about aviation disasters, while the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999). Except for our own results and MENE + reference resolution, the results in Table 6 are all official MUC7 results.

The effect of a second reference resolution classifier is not entirely the same as
that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document.

Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.

We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high-performance NER without using separate classifiers to take care of global consistency or complex formulation of smoothing and backoff models (Bikel et al., 1997). Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.

We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved
excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
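As a concrete illustration of the acronym-to-full-form matching behind the ACRO feature group of Section 4.2, here is a minimal sketch. It is simplified relative to the paper (whitespace tokenization, no zone handling, first-letter initials only), and the example sentence is invented around the FCC case from the text.

```python
def acro_features(tokens):
    """Assign A_begin/A_continue/A_end to maximal runs of initial-capitalized
    words whose initials spell an all-caps acronym found in the same document,
    and A_unique to the matched acronym token. Returns {token index: feature}."""
    acronyms = {t for t in tokens if t.isupper() and len(t) > 1}
    feats, matched = {}, set()
    i = 0
    while i < len(tokens):
        # collect a maximal run of initCaps (but not all-caps) words
        j = i
        while j < len(tokens) and tokens[j][:1].isupper() and not tokens[j].isupper():
            j += 1
        run = tokens[i:j]
        if len(run) > 1 and "".join(w[0] for w in run) in acronyms:
            feats[i] = "A_begin"
            for k in range(i + 1, j - 1):
                feats[k] = "A_continue"
            feats[j - 1] = "A_end"
            matched.add("".join(w[0] for w in run))
        i = j if j > i else i + 1
    for idx, t in enumerate(tokens):
        if t in matched:
            feats[idx] = "A_unique"
    return feats
```

Running this over a document containing both FCC and Federal Communications Commission marks Federal/Communications/Commission as A_begin/A_continue/A_end and FCC as A_unique, mirroring the example in the text.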