|
{ |
|
"paper_id": "C98-1012", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:28:35.810221Z" |
|
}, |
|
"title": "Entity-Based Cross-Document Coreferencing Using the Vector Space Model", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Bagga", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Duke University Durham", |
|
"location": { |
|
"postBox": "Box 90129", |
|
"postCode": "27708-0129", |
|
"region": "NC" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Breck", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": { |
|
"addrLine": "3401 Walnut St. 400C Philadelphia", |
|
"postCode": "19104", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Cross-document coreference occurs when the same person, place, event, or concept is discussed in more than one text source. Computer recognition of this phenomenon is important because it helps break \"the document boundary\" by allowing a user to examine information about a particular entity from multiple text sources at the same time. In this paper we describe a cross-document coreference resolution algorithm which uses the Vector Space Model to resolve ambiguities between people having the same name. In addition, we also describe a scoring algorithm for evaluating the cross-document coreference chains produced by our system and we compare our algorithm to the scoring algorithm used in the MUC-6 (within document) coreference task.", |
|
"pdf_parse": { |
|
"paper_id": "C98-1012", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Cross-document coreference occurs when the same person, place, event, or concept is discussed in more than one text source. Computer recognition of this phenomenon is important because it helps break \"the document boundary\" by allowing a user to examine information about a particular entity from multiple text sources at the same time. In this paper we describe a cross-document coreference resolution algorithm which uses the Vector Space Model to resolve ambiguities between people having the same name. In addition, we also describe a scoring algorithm for evaluating the cross-document coreference chains produced by our system and we compare our algorithm to the scoring algorithm used in the MUC-6 (within document) coreference task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Cross-document coreference occurs when the same person, place, event, or concept is discussed in more than one text source. Computer recognition of this phenomenon is important because it helps break \"the document boundary\" by allowing a user to examine information about a particular entity from multiple text sources at the same time. In particular, resolving cross-document coreferences allows a user to identify trends and dependencies across documents. Cross-document coreference can also be used as the central tool for producing summaries from multiple documents, and for information fusion, both of which have been identified as advanced areas of research by the TIPSTER Phase III program. Cross-document coreference was also identified as one of the potential tasks for the Sixth Message Understanding Conference (MUC-6) but was not included as a formal task because it was considered too ambitious (Grishman 94) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 908, |
|
"end": 921, |
|
"text": "(Grishman 94)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we describe a highly successful crossdocument coreference resolution algorithm which uses the Vector Space Model to resolve ambiguities between people having the same name. In addition, we also describe a scoring algorithm for evaluating the cross-document coreference chains produced by our system and we compare our algorithm to the scoring algorithm used in the MUC-6 (within document) coreference task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Cross-Document Coreference: The Problem Cross-document corefereuce is a distinct technology from Named Entity recognizers like IsoQuest's Ne-tOwl and IBM's Textract because it attempts to determine whether name; matches are actually the same individual (not all John Smiths are the same).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Neither NetOwl or Textract have mechanisms which try to keep same-named individuals distinct if they are different people.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Cross-document coreference also differs in substantial ways from within-document coreference. Within a document there is a certain amount of consistency which cannot be expected across documents. In addition, the problems encountered during within document coreference are compounded when looking for coreferences across documents because the underlying principles of linguistics and discourse context no longer apply across documents. Because the underlying assumptions in crossdocument coreference are so distinct, they require novel approaches. Figure 1 shows the architecture of the crossdocument system developed. The system is built upon the University of Pennsylvania's within document coreference system, CAMP, which participated in the Seventh Message Understanding Conference (MUC-7) within document coreference task (MUC-7 1998).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 548, |
|
"end": 556, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our system takes as input the coreference processed documents output by CAMP. It then passes these documents through the SentenceExtractor module which extracts, for each document, all the sentences relevant to a particular entity of interest. The VSM-Disambiguate module then uses a vector space model algorithm to compute similarities between the sentences extracted for each pair of documents. Oliver \"Biff\" Kelly of Weymouth succeeds John Perry as president of the Massachusetts Golf Association. \"~Ve will have continued growth in the future,\" said Kelly, who will serve for two years. \"There's been a lot of changes and there will be continued changes as we head into the year 2000.\" Details about each of the main steps of the crossdocument coreference algorithm are given below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture and the Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 First, for each article, CAMP is run on the article. It produces coreference chains for all the entities mentioned in the article. Next, for the coreference chain of interest within each article (for example, the coreference chain that contains \"John Perry\"), the Sentence Extractor module extracts all the sentences that contain the noun phrases which form the coreference chain. In other words, the SentenceExtractor module produces a \"summary\" of the article with respect to the entity of interest. These summaries are a special case of the query sensitive techniques being developed at Penn using CAMP. Therefore, for doc.36 ( Figure 2 ), since at least one of tile three noun phrases (\"John Perry,\" \"he,\" and \"Perry\") in the coreference chain of interest appears in each of the three sentences in the extract, the summary produced by SentenceExtractor is the extract itself. On the other hand, the summary produced by Sen-tenceExtractor for the coreference chain of interest in doc.38 is only the first sentence of the extract because the only element of the coreference chain appears in this sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 633, |
|
"end": 641, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Architecture and the Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": ",, For each article, the VSM-Disambiguate module uses the summary extracted by the Sen-tenceExtractor and computes its similarity with the suminaries extracted from each of the other articles. Summaries having similarity above a certain threshold are considered to be regarding the same entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture and the Methodology", |
|
"sec_num": "3" |
|
}, |
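
{

"text": "The two steps above can be sketched as follows; this is a minimal illustration of ours, not the actual CAMP/VSM-Disambiguate implementation. The function names (extract_summary, link_same_entity) and the placeholder similarity argument are assumptions, and the tf-idf similarity actually used is sketched with Section 5 below. Cross-document coreference chains are then formed by grouping together documents connected by the resulting links.\n\ndef extract_summary(sentences, chain_mentions):\n    # Keep every sentence that contains at least one noun phrase from the\n    # coreference chain of interest (the 'summary' of the article).\n    return [s for s in sentences if any(m in s for m in chain_mentions)]\n\ndef link_same_entity(summaries, similarity, threshold):\n    # summaries: {doc_id: summary text}; similarity: a function of two summaries.\n    # Returns pairs of documents judged to discuss the same entity.\n    links = []\n    docs = sorted(summaries)\n    for i, d1 in enumerate(docs):\n        for d2 in docs[i + 1:]:\n            if similarity(summaries[d1], summaries[d2]) > threshold:\n                links.append((d1, d2))\n    return links",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Architecture and the Methodology",

"sec_num": "3"

},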
|
{ |
|
"text": "4 University of Pennsylvania's", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture and the Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The University of Pennsylvania's CAMP system resolves within document coreferences for several different classes including pronouns, and proper names (Baldwin 95) . It ranked among the top systems in the coreference task during the MUC-6 and the MUC-7 evaluations. The coreference chains output by CAMP enable us to gather all the information about the entity of interest in an article. This information about the entity is gathered by the SentenceExtractor module and is used by the VSM-Disambiguate module for disambiguation purposes. Consider the extract for doc.a6 shown in Figure 2 . We are able to include the fact that the John Perry mentioned in this article was the president of the Massachusetts Golf Association only because CAMP recognized that the \"he\" in the second sentence is coreferent with \"John Perry\" in tile first. And it is this fact which actually helps VSM-Disambiguate decide that the two John Perrys in doc.36 and doc.38 are the same person.", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 162, |
|
"text": "(Baldwin 95)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 578, |
|
"end": 586, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "CAMP System", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Vector Space Model The vector space model used for disambiguating entities across documents is the standard vector space model used widely in information retrieval (Salton 89 ). In this model, each summary extracted by the SentenceExtractor module is stored as a vector of terms. The terms in the vector are in their morphological root form and are filtered for stop-words (words that have no information content like a, the, of, an, ... ). If $1 and $2 are the vectors for the two summaries extracted fl'om documents D1 and D2, then their similarity is computed as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 178, |
|
"text": "(Salton 89", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Sim(Sl,S2) = ~ Wlj x w2j common terms tj", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where tj is a term present in both $1 and S~, wlj is the weight of the term tj in $1 and w2j is the weight of tj in $2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The weight of a term tj in the vector Si for a summary is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "tf x log Wij ~+...+2- ~S21 -~ 8i2 8i n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where tf is the frequency of the term tj in the summary, N is the total number of documents in the collection being examined, and df is the number of documents in the collection that the term tj occurs 2 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "in. x/s~l + si2 + ... + si, , is tile cosine normalization factor and is equal to the Euclidean length of the vector Si. The VSM-Disambiguate module, for each summary Si, computes the similarity of that summary with each of the other summaries. If the similarity computed is above a pre-defined threshold, then the entity of interest in the two summaries are considered to be coreferent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
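
{

"text": "To make the model above concrete, the following sketch builds term vectors and computes the weights and similarity exactly as defined in this section. It is an illustration of ours, not the system's code: the tokenizer, the abbreviated stop-word list, and the omission of a real morphological-root (stemming) step are simplifying assumptions. Here df is a mapping from term to document frequency over the N-document collection, passed in as n_docs.\n\nimport math\nimport re\nfrom collections import Counter\n\nSTOP_WORDS = {'a', 'the', 'of', 'an', 'in', 'and', 'to', 'is'}  # abbreviated list\n\ndef terms(summary):\n    # Lowercased tokens minus stop-words; a full system would also reduce\n    # each token to its morphological root form.\n    return [t for t in re.findall('[a-z]+', summary.lower()) if t not in STOP_WORDS]\n\ndef weights(summary, df, n_docs):\n    # w_ij = tf * log(N / df), cosine-normalized by the Euclidean vector length.\n    tf = Counter(terms(summary))\n    w = {t: f * math.log(n_docs / df[t]) for t, f in tf.items() if df.get(t)}\n    norm = math.sqrt(sum(v * v for v in w.values())) or 1.0\n    return {t: v / norm for t, v in w.items()}\n\ndef sim(s1, s2, df, n_docs):\n    # Sim(S1, S2) = sum over common terms tj of w1j * w2j.\n    w1, w2 = weights(s1, df, n_docs), weights(s2, df, n_docs)\n    return sum(w1[t] * w2[t] for t in w1.keys() & w2.keys())",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "5",

"sec_num": null

},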
|
{ |
|
"text": "The cross-document coreference system was tested on a highly ambiguous test set which consisted of 197 articles from 1996 and 1997 editions of tile New York Times. The sole criteria for including an article in the test set was the presence or the absence of a string in the article which matched the \"/John.*?Smith/\" regular expression. In other words, all of the articles either contained the name John Smith or contained some variation with a middle initial/name. The system did not use any New York Times data for training purposes. The answer keys regarding the cross-document chains were manually created, but the scoring was completely automated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": null |
|
}, |
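
{

"text": "For reference, the selection criterion described above amounts to a regular-expression check like the following; the helper name in_test_set is ours.\n\nimport re\n\n# An article enters the test set if it matches /John.*?Smith/, i.e. it contains\n# 'John Smith' or a variation with a middle initial or middle name.\nJOHN_SMITH = re.compile('John.*?Smith')\n\ndef in_test_set(article_text):\n    return JOHN_SMITH.search(article_text) is not None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": null

},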
|
{ |
|
"text": "There were 35 different John Smiths mentioned in the articles. Of these, 24 of them only had one article which mentioned them. The other 173 articles were regarding the 11 remaining John Smiths. The background of these John Smiths , and the number of articles pertaining to each, varied greatly. Descriptions of a few of the John Smiths are: Chairman and CEO of General Motors, assistant track coach at UCLA, the legendary explorer, and the main character in Disney's Pocahontas, former president of the Labor Party of Britain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of the Data", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Scoring the Output", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to score the cross-document coreference chains output by the system, we had to map the cross-document coreference scoring problem to a within-document coreference scoring problem. This was done by creating a meta document consisting of the file names of each of the documents that the system was run on. Assuming that each of the documents in the data set was about a single John Smith, the cross-document coreference chains produced by the system could now be evaluated by scoring the corresponding within-document coreference chains in the meta document. We used two different scoring algorithms for scoring the output. The first was the standard algorithm for within-document coreference chains which was used for the evaluation of the systems participating in the MUC-6 and the MUC-7 coreference tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7", |
|
"sec_num": null |
|
}, |
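
{

"text": "A minimal sketch of this mapping, with representation choices of our own: the meta document is just the list of file names, and each cross-document chain becomes a within-document chain over those file names, so the usual within-document scorers apply unchanged. The file names and chains below are illustrative.\n\n# The meta document: one 'mention' per input file (names are illustrative).\nmeta_document = ['doc.01', 'doc.02', 'doc.03', 'doc.04']\n\n# Key (truth) and response chains are then simply sets of file names, e.g.:\nkey_chains = [{'doc.01', 'doc.03'}, {'doc.02'}, {'doc.04'}]\nresponse_chains = [{'doc.01', 'doc.03'}, {'doc.02', 'doc.04'}]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "7",

"sec_num": null

},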
|
{ |
|
"text": "The shortcomings of the MUC scoring algorithm when used for the cross-document coreference task forced us to develop a second algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Details about both these algorithms follow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The MUC algorithm computes precision and recall statistics by looking at the number of links identified by a system compared to the links in an answer key. In the model-theoretic description of the algorithm that follows, the term \"key\" refers to the manually annotated coreference chains (the truth) while the term \"response\" refers to the coreference chains output by a system. An equivalence set is the transitive closure of a coreference chain. The algorithm, developed by (Vilain 95) , computes recall in the following way. First, let S be an equivalence set generated by the key, and let R1... Rm be equivalence classes generated by the response. Then we define the following functions over S:", |
|
"cite_spans": [ |
|
{ |
|
"start": 477, |
|
"end": 488, |
|
"text": "(Vilain 95)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MUC Coreference Scoring Algorithm 1", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "\u2022 p(S) is a partition of S relative to the response.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MUC Coreference Scoring Algorithm 1", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Each subset of S in the partition is formed by intersecting S and those response sets Ri that overlap S. Note that the equivalence classes defined by the response may include implicit singleton sets -these correspond to elements that are mentioned in the key but not in the response. For example, say the key generates the equivalence class S = {A B C D}, and the response is simply <A-B>. The relative partition p(S) is then {A B} {C} and {D}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MUC Coreference Scoring Algorithm 1", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "\u2022 e(S) is the minimal number of \"correct\" links necessary to generate the equivalence class S. It is clear that c(S) is one less than the cardinality of s, i.e., c(S) = (ISl-1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MUC Coreference Scoring Algorithm 1", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "\u2022 re(S) is the number of \"missing\" links in the response relative to the key set S. As noted above, this is the number of links necessary to 1The exposition of this scorer has been taken nearly entirely from (Vilain 95) . fully reunite any components of the p(S) partition. We note that this is simply one fewer than the number of elements in the partition, that is, m(S) = (]p(S)I-1) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 219, |
|
"text": "(Vilain 95)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MUC Coreference Scoring Algorithm 1", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Looking in isolation at a single equivalence class in the key, the recall error for that class is just the number of missing links divided by the number of correct links, i.e., c(S) \" Recall in turn is ~ which equals e(s) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The MUC Coreference Scoring Algorithm 1", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "The whole expression can now be simplified to IS]-Ip(S)I", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(PSI-1) -(]P(S)I-1) Isl-1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Precision is computed by switching the roles of the key and response in the above formulation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Isl-1", |
|
"sec_num": null |
|
}, |
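
{

"text": "As a sketch of our reading of the (Vilain 95) scheme just described, the recall formula (and precision, with the roles of key and response switched) can be written as follows; the representation of chains as sets of mention identifiers is an assumption of ours.\n\ndef muc_recall(key_chains, response_chains):\n    # key_chains, response_chains: lists of sets of mention identifiers.\n    # Recall = sum over key sets S of (|S| - |p(S)|) / sum over S of (|S| - 1),\n    # where p(S) partitions S by the response sets, with implicit singletons\n    # for mentions of S that appear in no response set.\n    numerator = denominator = 0\n    for s in key_chains:\n        covered = set()\n        parts = 0\n        for r in response_chains:\n            if s & r:\n                parts += 1\n                covered |= s & r\n        parts += len(s - covered)   # implicit singleton sets\n        numerator += len(s) - parts\n        denominator += len(s) - 1\n    return numerator / denominator if denominator else 1.0\n\ndef muc_precision(key_chains, response_chains):\n    # Precision switches the roles of the key and the response.\n    return muc_recall(response_chains, key_chains)\n\nOn the example above (key S = {A, B, C, D}, response {A, B}) this yields a recall of (4 - 3)/(4 - 1) = 1/3.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The MUC Coreference Scoring Algorithm 1",

"sec_num": "7.1"

},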
|
{ |
|
"text": "While the (Vilain 95) provides intuitive results for coreference scoring, it however does not work as well in the context of evaluating cross document coreference. There are two main reasons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Shortcomings of the MUC Scoring Algorithm", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "1. The algorithm does not give any credit for separating out singletons (entities that occur in chains consisting only of one element, the entity itself) from other chains which have been identified. This follows from the convention in 2. All errors are considered to be equal. The MUC scoring algorithm penalizes the precision numbers equally for all types of errors. It is our position that, for certain tasks, some coreference errors do more damage than others. Consider the following examples: suppose the truth contains two large coreference chains and one small one ( Figure 6 ), and suppose Figures 7 and 8 show two different responses. We will explore two different precision errors. The first error will connect one of the large coreferenee chains with the small one ( Figure 7) . The second error occurs when the two large coreference chains are related by the errant coreferent link (Figure 8 ). It is our position that the second error is more damaging because, compared to the first error, the second error makes more entities coreferent that should not be. This distinction is not reflected in the (Vilain 95) scorer which scores both responses as having a precision score of 90% (Figure 9 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 574, |
|
"end": 582, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 778, |
|
"end": 787, |
|
"text": "Figure 7)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 894, |
|
"end": 903, |
|
"text": "(Figure 8", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1194, |
|
"end": 1203, |
|
"text": "(Figure 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Shortcomings of the MUC Scoring Algorithm", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Imagine a scenario where a user recalls a collection of articles about John Smith, finds a single article about the particular John Smith of interest and wants to see all the other articles about that individual. In commercial systems with News data, precision is typically the desired goal in such settings. As a result we wanted to model the accuracy of the system on a per-document basis and then build a more global score based on the sum of the user's experiences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our B-CUBED Scoring Algorithm 2", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "Consider the case where the user selects document 6 in Figure 8 . This a good outcome with all the relevant documents being found by the system and no extraneous documents. If the user selected document 1, then there are 5 irrelevant documents in the systems output -precision is quite low then. The goal of our scoring algorithm then is to model the precision and recall on average when looking for more documents about the same person based on selecting a single document.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 63, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Our B-CUBED Scoring Algorithm 2", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "Instead of looking at the links produced by a system, our algorithm looks at the presence/absence of entities from the chains produced. Therefore, we compute the precision and recall numbers for each entity in the document. The numbers computed with respect to each entity in the document are then combined to produce final precision and recall numbers for the entire output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our B-CUBED Scoring Algorithm 2", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "For an entity, i, we define the precision and recall with respect to that entity in Figure 10 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 93, |
|
"text": "Figure 10", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Our B-CUBED Scoring Algorithm 2", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "The final precision and recall numbers are computed by the following two formulae:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our B-CUBED Scoring Algorithm 2", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "N Final Precision = ~ Wi * Precision, i=l N Final Recall = E wi * Recalli i~-i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our B-CUBED Scoring Algorithm 2", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "where N is the number of entities in the document, and wi is the weight assigned to entity i in the doeument. For all the examples and the cxperiments in this paper we assign equal weights to each entity i.e. wi = 1IN. We have also looked at the possibilities of using other weighting schemes. Nlrther details about the B-CUBED algorithm including a model theoretic version of tile algorithm carl be found in (Bagga 98a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 409, |
|
"end": 420, |
|
"text": "(Bagga 98a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our B-CUBED Scoring Algorithm 2", |
|
"sec_num": "7.3" |
|
}, |
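
{

"text": "Putting the per-entity definitions of Figure 10 together with the weighted sums above, a sketch of the B-CUBED scorer with equal weights wi = 1/N might look as follows; representing chains as lists of sets of entity identifiers is our own choice, and every entity is assumed to appear in exactly one chain on each side (singletons included).\n\ndef b_cubed(key_chains, response_chains):\n    # key_chains, response_chains: lists of sets of entity identifiers.\n    key_of = {e: chain for chain in key_chains for e in chain}\n    resp_of = {e: chain for chain in response_chains for e in chain}\n    entities = list(key_of)\n    w = 1.0 / len(entities)                 # equal weight for every entity\n    precision = recall = 0.0\n    for e in entities:\n        correct = len(key_of[e] & resp_of[e])\n        precision += w * correct / len(resp_of[e])   # Precision_i\n        recall += w * correct / len(key_of[e])       # Recall_i\n    return precision, recall\n\nFor the entity-6 example discussed in the next paragraph, this computation reproduces a precision of 2/7 and a recall of 2/2 for that entity.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Our B-CUBED Scoring Algorithm 2",

"sec_num": "7.3"

},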
|
{ |
|
"text": "Consider the response shown in Figure 7 . Using the B-CUBED algorithm, the precision for entity-6 in the document equals 2/7 because the chain output for the entity contains 7 elements, 2 of which are correct, namely {6,7}. The recall for entity-6, however, is 2/2 because the chain output for the entity has 2 correct elements in it and the \"truth\" chain for the entity only contains those 2 elements. Figure 9 shows the final precision and recall numbers computed by the B-CUBED algorithm for the examples shown in Figures 7 and 8 . The figure also shows the precision and recall numbers for each entity (ordered by entity-numbers).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 39, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 411, |
|
"text": "Figure 9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 532, |
|
"text": "Figures 7 and 8", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Our B-CUBED Scoring Algorithm 2", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "The B-CUBED algorithm does overcome the the two main shortcomings of the MUC scoring algorithm shortcoming of the MUC-6 algorithm by calculating the precision and recall numbers for each entity in the document (irrespective of whether an entity is part of a coreference chain). Consider the responses shown in Figures 7 and 8 . We had mentioned earlier that the error of linking the the two large chains in the second response is more damaging than the error of linking one of the large chains with the smaller chain in the first response. Our scoring algorithm takes this into account and computes a final precision of 58% and 76% for the two responses respectively. In comparison, the MUC algorithm computes a precision of 90% for both the responses (Figure 9 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 325, |
|
"text": "Figures 7 and 8", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 761, |
|
"text": "(Figure 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Overcoming the Shortcomings of the MUC Algorithm", |
|
"sec_num": "7.4" |
|
}, |
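
{

"text": "For concreteness, the comparison above can be checked numerically. Figures 6-8 are not reproduced here, so the chains below are a reconstruction of ours that is merely consistent with the numbers reported in the text (a 7-element merged chain containing the correct pair {6, 7}, and MUC precision of 90% for both responses): truth chains {1-5}, {6, 7}, and {8-12}, with response 1 merging a large chain with the small one and response 2 merging the two large chains.\n\n# Reconstructed example (our assumption; see the lead-in above).\ntruth = [set(range(1, 6)), {6, 7}, set(range(8, 13))]\nresp1 = [set(range(1, 6)) | {6, 7}, set(range(8, 13))]      # large + small merged\nresp2 = [set(range(1, 6)) | set(range(8, 13)), {6, 7}]      # the two large merged\n\ndef bcubed_precision(key, response):\n    # Equal-weight B-CUBED precision, as defined in Section 7.3.\n    key_of = {e: c for c in key for e in c}\n    out_of = {e: c for c in response for e in c}\n    ents = sorted(key_of)\n    return sum(len(key_of[e] & out_of[e]) / len(out_of[e]) for e in ents) / len(ents)\n\nprint(round(bcubed_precision(truth, resp1), 2))   # 0.76\nprint(round(bcubed_precision(truth, resp2), 2))   # 0.58\n# The MUC scorer, by contrast, assigns 0.90 to both responses.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Overcoming the Shortcomings of the MUC Algorithm",

"sec_num": "7.4"

},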
|
{ |
|
"text": "Results Figure 11 shows the precision, recall, and F-Measure (with equal weights for both precision and recall) using the B-CUBED scoring algorithm. The Vector Space Model in this case constructed the space of terms only from the summaries extracted by Sen-tenceExtractor. In comparison, Figure 12 shows the results (using the B-CUBED scoring algorithm) when the vector space model constructed the space of terms from the articles input to the system (it still used the summaries when computing the similarity). The importance of using CAMP to extract summaries is verified by comparing the highest F-Measures achieved by the system for the two cases. The highest F-Measure for the former case is 84.6% while the highest F-Measure for the latter case is 78.0%. In comparison, for this task, named-entity tools like NetOwl and Textract would mark all the John Smiths the same. Their performance using our Figures 13 and 14 show the precision, recall, and F-Measure calculated using the MUC scoring algorithm. Also, the baseline case when all the John Smiths are considered to be the same person achieves 83% precision and 100% recall. The high initial precision is mainly due to the fact that the MUC algorithm assumes that all errors are equal.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 17, |
|
"text": "Figure 11", |
|
"ref_id": "FIGREF7" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 297, |
|
"text": "Figure 12", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 904, |
|
"end": 921, |
|
"text": "Figures 13 and 14", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "8", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have also tested our system on other classes of cross-document coreference like names of companies, and events. Details about these experiments can be found in (Bagga 98b) . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 174, |
|
"text": "(Bagga 98b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "8", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As a novel research probleni, cross document coreference provides an different perspective from related phenomenon like named entity recognition and within document coreferenee. Our system takes summaries about an entity of interest and uses various information retrieval metrics to rank the similarity of the summaries. We found it quite challenging to arrive at a scoring metric that satisfied our intuitions about what was good system output v.s. bad, but we have developed a scoring algorithm that is an improvement for this class of data over other within document coreference scoring algorithms. Our resuits are quite encouraging with potential performance being as good as 84.6% (F-Measure). ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "2The main idea of this algorithm was initially put forth by Alan W. Biermann of Duke University.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The first author was supported in part by a Fellowship from IBM Corporation, and in part by the Institute for Research in Cognitive Science at the University of Pennsylvania.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "10" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Algorithms for Scoring Coreference Chains", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Bagga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Breck", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bagga, Amit, and Breck Baldwin. Algorithms for Scoring Coreference Chains. To appear at The First International Conference on Language Re- sources and Evaluation Workshop on Linguistics Coreference, May 1998.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "How Much Processing Is Required for Cross-Document Corderence? rio appear at The First International Conference on Language Resources and Evaluation on Linguistics Coreferenee", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Bagga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Breck", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bagga, Amit, and Breck Baldwin. How Much Pro- cessing Is Required for Cross-Document Corder- ence? rio appear at The First International Con- ference on Language Resources and Evaluation on Linguistics Coreferenee, May 1998.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "University of Pennsylvania: Description of the University of Pennsylvania System Used for MUC-6", |
|
"authors": [ |
|
{ |
|
"first": "Breck", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Sixth Message Understanding Conference (MUC-6)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baldwin, Breck, et al. University of Pennsylva- nia: Description of the University of Pennsylva- nia System Used for MUC-6, Proceedings of the Sixth Message Understanding Conference (MUC- 6), pp. 177-191, November 1995.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Whither Written Language Evaluation?", |
|
"authors": [ |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the Human Language Technology Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--125", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grishman, Ralph. Whither Written Language Eval- uation?, Proceedings of the Human Language Technology Workshop, pp. 120-125, March 1994, San Francisco: Morgan Kaufmann.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Proceedings of the Seventh Message Understanding Conference (MUC-7)", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Proceedings of the Seventh Message Understanding Conference (MUC-7), April 1998.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer", |
|
"authors": [ |
|
{ |
|
"first": "Gerard", |
|
"middle": [], |
|
"last": "Salton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Salton, Gerard. Automatic Text Processing: The Transformation, Analysis, and Retrieval of In- formation by Computer, 1989, Reading, MA: Addison-Wesley.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A Model-Theoretic Coreference Scoring Scheme", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Vilain", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Sixth Message Understanding Conference (MUC-6)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vilain, Marc, et al. A Model-Theoretic Coreference Scoring Scheme, Proceedings of the Sixth Message Understanding Conference (MUC-6), pp. 45-52, November 1995, San Francisco: Morgan Kauf- mann.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Coreference Chains for doc.36", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "For example, consider the two extracts in Figures 2 and 4. The coreference chains output by CAMP for the two extracts are shown in Figures 3 and 5.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Coreference Chains for doc.38", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Figure 6: Truth", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Response: Example 2 coreference annotation of not identifying those entities that are markable as possibly coreferent with other entities in the text. Rather, entities are only marked as being coreferent if they actually are coreferent with other entities in the text. This shortcoming could be easily enough overcome with different annotation conventions and with minor changes to the algorithm, but it is worth noting.", |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "discussed earlier. It implicitly overcomes the first Output MUC Algorithm B-CUBED Algorithm (equal weights for every entity) Scores of Both Algorithms o11 the Examples number of correct elements in the output chain containing entityi Recalli = number of elements in the output chain containing entity~ number of correct elements in the output chain containing entityi number of elements in the truth chain containing entityi Definitions for Precision and Recall for an Entity i", |
|
"num": null |
|
}, |
|
"FIGREF7": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Precision, Recall, and F-Measure Using the B-CUBED Algorithm With Training On the Summaries scoring algorithm is 23% precision, and 100% recall.", |
|
"num": null |
|
}, |
|
"FIGREF8": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Precision, Recall, and F-Measure Using the MUC Algorithm With Training On the Summaries", |
|
"num": null |
|
}, |
|
"FIGREF9": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Precision, Recall, and F-Measure Using the MUC Algorithm With Training On Entire Articles", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |