{ "paper_id": "M95-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:12:55.259582Z" }, "title": "CRL/NMSU Description of the CRL/NMSU Systems Used for MUC-6", "authors": [ { "first": "Jim", "middle": [], "last": "Cowie", "suffix": "", "affiliation": { "laboratory": "Computing Research Laboratory", "institution": "New Mexico State University", "location": {} }, "email": "jcowie@nmsu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Introduction This paper discusses the two CRL named entity recognition systems submitted for MUC-6. The systems are based on entirely different approaches. The first is a data-intensive method which uses human-generated patterns. The second uses the training data to develop decision trees which detect the start and end points of names. Background CRL submitted two systems for the Named Entity task. One of these (Basic) is an improved version of the CRL name recognizer developed in phase one of Tipster [1]. The second (AutoLearn) is a system which learns automatically from training data. The Basic system had approximately six man-months of work in its original development. Improvements for MUC-6 were carried out by one graduate student (about one man-month). The AutoLearn system was developed by one graduate student specifically for MUC-6 (also one man-month). Availability The Basic system can be accessed for testing through the mail", "pdf_parse": { "paper_id": "M95-1013", "_pdf_hash": "", "abstract": [ { "text": "Introduction This paper discusses the two CRL named entity recognition systems submitted for MUC-6. The systems are based on entirely different approaches. The first is a data-intensive method which uses human-generated patterns. The second uses the training data to develop decision trees which detect the start and end points of names. Background CRL submitted two systems for the Named Entity task. 
One of these (Basic) is an improved version of the CRL name recognizer developed in phase one of Tipster [1]. The second (AutoLearn) is a system which learns automatically from training data. The Basic system had approximately six man-months of work in its original development. Improvements for MUC-6 were carried out by one graduate student (about one man-month). The AutoLearn system was developed by one graduate student specifically for MUC-6 (also one man-month). Availability The Basic system can be accessed for testing through the mail", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "For the most part the system is context free. A few of the patterns used require some additional context before a name is recognized. For example, an ambiguous human name in isolation may be recognized if it is followed closely by a title.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The system consists of a suite of `C' and lex programs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Component units recognized by the system are cities, provinces, countries, company prefixes and suffixes, company beginning and ending words (Club, Association, etc.), unambiguous and ambiguous human first and last names, titles and human position words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Final patterns are used to join together units of the same type which are immediately next to each other in the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Fix-up Procedures", "sec_num": null }, { "text": "After all the pattern-based procedures have operated on the text, a final pass is made to recognize abbreviated forms of names. This takes the lists of names found so far and truncates them 
(right to left for persons and left to right for companies). These new lists are then used as lists of known organizations and persons, and any occurrences of these in the text are marked. In particular, for organizations the headline is not processed apart from this last stage. This avoids recognition of organizations such as \"Leaves Bank\". The assumption that names mentioned in the headline will be repeated in the body of the text holds almost universally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Fix-up Procedures", "sec_num": null }, { "text": "The data used in the Basic system is derived from public domain sources: university phone lists, the Tipster Gazetteer and government databases of company names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": null }, { "text": "The performance of the system for the test set and for the walk-through article is given in Appendix A. Overall performance was Recall 85% and Precision 87%, giving an F-measure of 85.8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": null }, { "text": "Performance here was Recall 63% and Precision 83%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Walk through article", "sec_num": null }, { "text": "The main source of error was missing patterns in the system. For example, Robert L. James was only partially recognized (as L. James), and McCann-Erickson was missed as no hyphenated company pattern had been added. Once a frequently mentioned name is missed in its full form, the system unfortunately misses all abbreviated forms as well. This article also shows the importance of context in reliably recognizing some names (e.g. 
an analyst with PaineWebber).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Walk through article", "sec_num": null }, { "text": "The AutoLearn system was developed to explore the possibility of using simple learning algorithms to detect specific features in text. An implementation of Quinlan's ID3 algorithm was used [2, 3]. This algorithm constructs a decision tree which decides whether an element of a collection satisfies a property or not. Each element of a collection has a finite number of attributes, each of which may take one of several values. Quinlan's original paper suggests the range of values of the attributes should be \"small\". In the case of the AutoLearn system the values are every word occurring in the training collection.", "cite_spans": [ { "start": 189, "end": 192, "text": "[2,", "ref_id": "BIBREF2" }, { "start": 193, "end": 195, "text": "3]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "AutoLearn NE System", "sec_num": null }, { "text": "In order to apply the ID3 algorithm the data needs to be structured into a collection, each member of which has specific values for a set of attributes, and for each of which it is known whether the member has a specific property or not. For the name recognition problem the training data was converted into tuples of five words. Each tuple was marked as having the start (or end) of a specific type of proper name at the middle word of the tuple. This data can be easily generated from the training articles. Thus for the beginning of a person -", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collections for Name Recognition", "sec_num": null }, { "text": "many differences between Robert L . -1; differences between Robert L . James 1; between Robert L . 
James , -1; etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collections for Name Recognition", "sec_num": null }, { "text": "Fourteen sets of training data were generated using the 318 development articles supplied for MUC-6. The quality of the tagging is not particularly uniform, but no attempt has been made to improve this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collections for Name Recognition", "sec_num": null }, { "text": "As each word of the training data is read it is hashed and stored in a hash table. Thereafter words are referred to by their hash values. For each of the values of the five attributes (words 1 through 5) a count is maintained of the number of times this value contributed to an element holding a proper-name occurrence at the middle attribute. The attribute to be tested first is chosen by computing for each value the relative frequency of positive and negative outcomes for this value. This is used to approximate the information content of that attribute: -p+ log2 p+ - p- log2 p- (EQ 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating the decision trees", "sec_num": null }, { "text": "The sum of the approximate information contents for each column is calculated and the column with the highest value is chosen as the primary decision. Here all the values which always contributed to a positive outcome are used as the primary decision. Values which are always negative are ignored (this is primarily to reduce the size of the data being handled). New sub-collections are then formed from the elements containing a value which contributed to both positive and negative outcomes, and the tree-building process is repeated for each of these new collections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating the decision trees", "sec_num": null }, { "text": "The decision trees thus formed can be output in a readable, if somewhat lengthy, form. 
In most cases the first choice is the third word in a group, taking one of a large number of values. Thereafter a group of fairly impenetrable tests occurs. For example, for location beginnings - if word 3 is one of the following - Milwaukee Ridgefild Pa ST.. (around 300 more words) then location_beginning; else if word 3 is Illinois and word 1 is Indiana then location_beginning; else if word 3 is Northeast and word 1 is `in' then location_beginning", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating the decision trees", "sec_num": null }, { "text": "The printed decision table takes about 5 pages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating the decision trees", "sec_num": null }, { "text": "A pass through the texts is made for each decision tree (beginning and end) of each named entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the AutoLearn System", "sec_num": null }, { "text": "First the hash table of words is read, together with the corresponding decision tree. The text is then processed in groups of five words. Whenever a positive decision is made a new tag is added to the output stream.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the AutoLearn System", "sec_num": null }, { "text": "Ideally at this stage the tagging would be done. However, given that we are processing new texts, there are many occasions where an end or a beginning is identified, but the corresponding beginning or end is not. For example, a surname may have been seen previously, but not the attached forename. At this point a heuristic is applied which, for every unmatched bracket in the text, works forward or backward until some appropriate point is reached. 
The actual skipping heuristics need to be different for organizations, persons, locations, dates and numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running the AutoLearn System", "sec_num": null }, { "text": "The only data source used for the AutoLearn system was the 318 MUC-6 training texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": null }, { "text": "A high precision was expected from this system. Most of the errors that occur are due to failures of the bracket insertion heuristic. The overall scores were Recall 47% and Precision 81%, giving an F-measure of 59.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": null }, { "text": "No specific code was inserted to handle numbers or dates. The method was more successful with organizations and locations than with persons. More training data is perhaps required to make the system aware of the spread of examples for human names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": null }, { "text": "The performance here was Recall 36% and Precision 88%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Walk through article", "sec_num": null }, { "text": "The major problem here is that the system has not learned a rule which uses a preceding \"Mr.\" to identify the following word as a name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Walk through article", "sec_num": null }, { "text": "The evaluation texts were processed with decision trees generated using subsets of the MUC-6 development data. This was done in increasing units of 50 texts. The results are shown in Figure 1 below. Both recall and precision increase with increasing training data. Precision appears to tail off at around 82%. Recall, however, increases (with one exception) steadily over the whole range. 
", "cite_spans": [], "ref_spans": [ { "start": 183, "end": 191, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Relationship of Performance to Amount of Training Data", "sec_num": null }, { "text": "We intend to rebuild the Basic system. One of the principal drawbacks of the system is its sequential application of component tags. In many cases a second tag is not applied because the word or phrase is ambiguous. The correct solution here is to apply all tags in a manner that allows the correct tags to be selected by the pattern-processing mechanisms. In addition we plan to improve our collection of patterns. The current version of the system is being made generally available. This, we hope, will provide us with some feedback on patterns and errors in the data files.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Developments", "sec_num": null }, { "text": "Some further experiments are also planned with the AutoLearn system. The main drawback with the system is that it does not make maximal use of the training data, in that with small training samples one word may be sufficient to make a decision. This situation can probably be improved by replacing specific words with a NULL word. This will force the system to develop rules based more on context. 
In particular, when the system encounters unknown words these will be considered equivalent to the NULL word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Developments", "sec_num": null }, { "text": "We also intend to apply the learning method described here to other NLP tasks such as part-of-speech tagging and disambiguation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Developments", "sec_num": null }, { "text": "Appendix A - Basic System Scores; Appendix A - Basic System Walk-through Scores", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Developments", "sec_num": null }, { "text": "Enamex", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLOT | POS ACT | COR PAR INC | SPU MIS NON | REC PRE UND OVG ERR SUB", "sec_num": null }, { "text": "F-MEASURES: P&R 59.38, 2P&R 70.57, P&2R 51.25", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLOT | POS ACT | COR PAR INC | SPU MIS NON | REC PRE UND OVG ERR SUB", "sec_num": null }, { "text": "0.00 45.00 40.00", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The CRL/Brandeis System as Used for MUC-5", "authors": [ { "first": "J", "middle": [], "last": "Cowie", "suffix": "" }, { "first": "L", "middle": [], "last": "Guthrie", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "S", "middle": [], "last": "Waterman", "suffix": "" }, { "first": "T", "middle": [], "last": "Wakao", "suffix": "" } ], "year": null, "venue": "Proceedings of the Fifth Message 
Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cowie, J., Guthrie, L., Pustejovsky, J., Waterman, S., and Wakao, T., The CRL/Brandeis System as Used for MUC-5. In Proceedings of the Fifth Message Understanding Conference (MUC-5).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discovering Rules by Induction from Large Collections of Examples", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Quinlan", "suffix": "" } ], "year": 1979, "venue": "Expert Systems in the Micro-Electronic Age", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, J.R. Discovering Rules by Induction from Large Collections of Examples. In Expert Systems in the Micro-Electronic Age, ed. Michie, D., Edinburgh University Press, 1979.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Machine Learning: Easily Understood Decision Rules", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Quinlan", "suffix": "" } ], "year": 1991, "venue": "Computer Systems that Learn", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, J.R. Machine Learning: Easily Understood Decision Rules. In Computer Systems that Learn, eds. Weiss, S.M. and Kulikowski, C.A., Morgan Kaufmann, 1991.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Relationship of Performance to Amount of Training Data" }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "* * * DOCUMENT SECTION SCORES * * * SLOT | POS ACT | COR PAR INC | SPU MIS NON | REC PRE UND OVG ERR SUB" }, "TABREF0": { "num": null, "type_str": "table", "html": null, "content": "
<enamex>       925  889 |  841    0    0 |   48   84    0 |  91  95   9   5  14   0
   type        925  889 |  784    0   57 |   48   84    0 |  85  88   9   5  19   7
   text        925  889 |  755    0   86 |   48   84    0 |  82  85   9   5  22  10
   subtotals  1850 1778 | 1539    0  143 |   96  168    0 |  83  86   9   5  21   8
<timex>        111  108 |  102    0    0 |    6    9    0 |  92  94   8   6  13   0
   type        111  108 |  102    0    0 |    6    9    0 |  92  94   8   6  13   0
   text        111  108 |   92    0   10 |    6    9    0 |  83  85   8   6  21  10
   subtotals   222  216 |  194    0   10 |   12   18    0 |  87  90   8   6  17   5
<numex>         93  102 |   91    0    0 |   11    2    0 |  98  89   2  11  12   0
   type         93  102 |   91    0    0 |   11    2    0 |  98  89   2  11  12   0
   text         93  102 |   88    0    3 |   11    2    0 |  95  86   2  11  15   3
   subtotals   186  204 |  179    0    3 |   22    4    0 |  96  88   2  11  14   2
ALL OBJECTS   2258 2198 | 1912    0  156 |  130  190    0 |  85  87   8   6  20   8
MATCHED ONLY  2068 2068 | 1912    0  156 |    0    0    0 |  92  92   0   0   8   8
", "text": "POS ACT | COR PAR INC | SPU MIS NON | REC PRE UND OVG ERR SUB" } } } }
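The AutoLearn training-data and attribute-selection steps described in the body text (five-word tuples labeled at the middle word, and the EQ 1 information measure -p+ log2 p+ - p- log2 p-) can be sketched as follows. This is a minimal illustration under stated assumptions, not the original implementation: the function names `make_tuples`, `info`, and `best_column` are hypothetical, and the tokenization is simplified.

```python
import math
from collections import defaultdict

def make_tuples(tokens, start_positions):
    # Slide a five-word window over the token stream; a tuple is positive
    # when a name start falls on its middle (third) word, as in the paper.
    examples = []
    for i in range(len(tokens) - 4):
        window = tuple(tokens[i:i + 5])
        label = (i + 2) in start_positions  # index of the middle word
        examples.append((window, label))
    return examples

def info(pos, neg):
    # EQ 1: -p+ log2 p+ - p- log2 p- for one attribute value,
    # from its counts of positive and negative outcomes.
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p, n = pos / total, neg / total
    return -p * math.log2(p) - n * math.log2(n)

def best_column(examples):
    # Count positive/negative outcomes per value in each of the five
    # attribute columns, sum the approximate information content per
    # column, and pick the column with the highest value.
    counts = [defaultdict(lambda: [0, 0]) for _ in range(5)]
    for window, label in examples:
        for col, word in enumerate(window):
            counts[col][word][0 if label else 1] += 1
    scores = [sum(info(p, n) for p, n in counts[col].values())
              for col in range(5)]
    return max(range(5), key=scores.__getitem__)
```

A full ID3 tree would then recurse on the sub-collections whose values saw both positive and negative outcomes; the sketch stops at the primary decision, which is the step the paper describes in detail.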