|
{ |
|
"paper_id": "C04-1033", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:21:04.966594Z" |
|
}, |
|
"title": "An NP-Cluster Based Approach to Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Xiaofeng", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Chew", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University of Singapore", |
|
"location": { |
|
"postCode": "117543", |
|
"settlement": "Singapore" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Traditionally, coreference resolution is done by mining the reference relationships between NP pairs. However, an individual NP usually lacks adequate description information of its referred entity. In this paper, we propose a supervised learning-based approach which does coreference resolution by exploring the relationships between NPs and coreferential clusters. Compared with individual NPs, coreferential clusters could provide richer information of the entities for better rules learning and reference determination. The evaluation done on MEDLINE data set shows that our approach outperforms the baseline NP-NP based approach in both recall and precision.", |
|
"pdf_parse": { |
|
"paper_id": "C04-1033", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Traditionally, coreference resolution is done by mining the reference relationships between NP pairs. However, an individual NP usually lacks adequate description information of its referred entity. In this paper, we propose a supervised learning-based approach which does coreference resolution by exploring the relationships between NPs and coreferential clusters. Compared with individual NPs, coreferential clusters could provide richer information of the entities for better rules learning and reference determination. The evaluation done on MEDLINE data set shows that our approach outperforms the baseline NP-NP based approach in both recall and precision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Coreference resolution is the process of linking as a cluster 1 multiple expressions which refer to the same entities in a document. In recent years, supervised machine learning approaches have been applied to this problem and achieved considerable success (e.g. Aone and Bennett (1995) ; McCarthy and Lehnert (1995) ; Soon et al. (2001) ; Ng and Cardie (2002b) ). The main idea of most supervised learning approaches is to recast this task as a binary classification problem. Specifically, a classifier is learned and then used to determine whether or not two NPs in a document are co-referring. Clusters are formed by linking coreferential NP pairs according to a certain selection strategy. In this way, the identification of coreferential clusters in text is reduced to the identification of coreferential NP pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 286, |
|
"text": "Aone and Bennett (1995)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 316, |
|
"text": "McCarthy and Lehnert (1995)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 337, |
|
"text": "Soon et al. (2001)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 340, |
|
"end": 361, |
|
"text": "Ng and Cardie (2002b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One problem of such reduction, however, is that the individual NP usually lacks adequate descriptive information of its referred entity. Consequently, it is often difficult to judge whether or not two NPs are talking about the same entity simply from the properties of the pair alone. As an example, consider the pair of a non-pronoun and its pronominal antecedent candidate. The pronoun itself gives few clues for the reference determination. Using such NP pairs would have a negative influence for rules learning and subsequent resolution. So far, several efforts (Harabagiu et al., 2001; Ng and Cardie, 2002a; Ng and Cardie, 2002b) have attempted to address this problem by discarding the \"hard\" pairs and select only those confident ones from the NP-pair pool. Nevertheless, this eliminating strategy still can not guarantee that the NPs in \"confident\" pairs bear necessary description information of their referents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 566, |
|
"end": 590, |
|
"text": "(Harabagiu et al., 2001;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 612, |
|
"text": "Ng and Cardie, 2002a;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 634, |
|
"text": "Ng and Cardie, 2002b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we present a supervised learning-based approach to coreference resolution. Rather than attempting to mine the reference relationships between NP pairs, our approach does resolution by determining the links of NPs to the existing coreferential clusters. In our approach, a classifier is trained on the instances formed by an NP and one of its possible antecedent clusters, and then applied during resolution to select the proper cluster for an encountered NP to be linked. As a coreferential cluster offers richer information to describe an entity than a single NP in the cluster, we could expect that such an NP-Cluster framework would enhance the resolution capability of the system. Our experiments were done on the the MEDLINE data set. Compared with the baseline approach based on NP-NP framework, our approach yields a recall improvement by 4.6%, with still a precision gain by 1.3%. These results indicate that the NP-Cluster based approach is effective for the coreference resolution task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of this paper is organized as follows. Section 2 introduces as the baseline the NP-NP based approach, while Section 3 presents in details our NP-Cluster based approach. Section 4 reports and discusses the experimental results. Section 5 describes related research work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Finally, conclusion is given in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Baseline: the NP-NP based approach", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We built a baseline coreference resolution system, which adopts the common NP-NP based learning framework as employed in (Soon et al., 2001) . Each instance in this approach takes the form of i {NP j , NP i }, which is associated with a feature vector consisting of 18 features (f 1 \u223c f 18 ) as described in Table 2 . Most of the features come from Soon et al. (2001)'s system. Inspired by the work of (Strube et al., 2002) and (Yang et al., 2004) , we use two features, StrSim1 (f 17 ) and StrSim2 (f 18 ), to measure the string-matching degree of NP j and NP i . Given the following similarity function:", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 140, |
|
"text": "(Soon et al., 2001)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 423, |
|
"text": "(Strube et al., 2002)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 447, |
|
"text": "(Yang et al., 2004)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 315, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Framework description", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Str Simlarity(Str 1 , Str 2 ) = 100 \u00d7 |Str 1 \u2229 Str 2 | Str 1 StrSim1 and StrSim2 are computed using Str Similarity(S N P j , S N P i ) and Str Similarity(S N P i , S N P j ), respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework description", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Here S N P is the token list of NP, which is obtained by applying word stemming, stopword removal and acronym expansion to the original string as described in Yang et al. (2004)'s work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework description", |
|
"sec_num": "2.1" |
|
}, |
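As a concrete illustration, the similarity function can be sketched in Python as follows; the token lists are assumed to be already preprocessed (stemmed, stopword-free, acronyms expanded), and the example NPs are hypothetical, not taken from the paper's corpus:

```python
def str_similarity(str1_tokens, str2_tokens):
    """Str_Similarity(Str1, Str2) = 100 * |Str1 intersect Str2| / |Str1|.

    Token lists are compared as sets, so the result is the percentage of
    (distinct) tokens of the first NP that also occur in the second.
    """
    s1, s2 = set(str1_tokens), set(str2_tokens)
    return 100.0 * len(s1 & s2) / len(s1)

# StrSim1 = Str_Similarity(S_NPj, S_NPi); StrSim2 is the reverse direction.
s_npj = ["mutant"]                 # e.g. "this mutant" after preprocessing
s_npi = ["mutant", "kbf1", "p50"]  # e.g. "a mutant of KBF1/p50"
strsim1 = str_similarity(s_npj, s_npi)  # 100.0: NPj's tokens fully covered
strsim2 = str_similarity(s_npi, s_npj)  # ~33.3: one of three tokens covered
```

The asymmetry is the point of having both features: a short anaphor can be fully covered by a long antecedent while covering little of it in return.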
|
{ |
|
"text": "During training, for each anaphor NP j in a given text, a positive instance is generated by pairing NP j with its closest antecedent. A set of negative instances is also formed by NP j and each NP occurring between NP j and NP i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework description", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "When the training instances are ready, a classifier is learned by C5.0 algorithm (Quinlan, 1993) . During resolution, each encountered noun phrase, NP j , is paired in turn with each preceding noun phrase, NP i . For each pair, a testing instance is created as during training, and then presented to the decision tree, which returns a confidence value (CF) 2 indicating the likelihood that NP i is coreferential to NP j . In our study, two antecedent selection strategies, Most Recent First (MRF) and Best First (BF), are tried to link NP j to its a proper antecedent with CF above a threshold (0.5). MRF (Soon et al., 2001 ) selects the candidate closest to the anaphor, while BF (Aone and Bennett, 1995; Ng and Cardie, 2002b) selects the candidate with the maximal CF.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 96, |
|
"text": "(Quinlan, 1993)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 623, |
|
"text": "(Soon et al., 2001", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 681, |
|
"end": 705, |
|
"text": "(Aone and Bennett, 1995;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 727, |
|
"text": "Ng and Cardie, 2002b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework description", |
|
"sec_num": "2.1" |
|
}, |
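The baseline's instance generation and the two antecedent selection strategies can be sketched as follows. This is a simplified illustration: NPs are indexed in document order, and `cf` is a hypothetical stand-in for the confidence output of the learned C5.0 decision tree, not the authors' code:

```python
def make_training_pairs(anaphor_idx, antecedent_idx, nps):
    """Soon et al. (2001)-style pairs: one positive instance with the
    closest antecedent, negatives with every NP in between."""
    pairs = [((nps[anaphor_idx], nps[antecedent_idx]), 1)]  # positive
    for k in range(antecedent_idx + 1, anaphor_idx):
        pairs.append(((nps[anaphor_idx], nps[k]), 0))       # negatives
    return pairs

def select_antecedent(candidates, cf, strategy="BF", threshold=0.5):
    """candidates: preceding NPs ordered closest-first; cf: NP -> CF."""
    scored = [(c, cf(c)) for c in candidates if cf(c) > threshold]
    if not scored:
        return None
    if strategy == "MRF":
        return scored[0][0]                      # closest candidate above 0.5
    return max(scored, key=lambda x: x[1])[0]    # candidate with maximal CF
```

For example, with candidate confidences {a: 0.6, b: 0.9, c: 0.4} (closest first), MRF links to `a` while BF links to `b`.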
|
{ |
|
"text": "Nevertheless, the problem of the NP-NP based approach is that the individual NP usually lacks adequate description information about its referred entity. Consequently, it is often difficult to determine whether or not two NPs refer to the same entity simply from the properties of the pair. See the the text segment in Table 1 [ 7 This mutant] also functions in vivo as a transacting dominant negative regulator:. . . mutant] are annotated in the same coreferential cluster. According to the above framework, NP 7 and its closest antecedent, NP 4 , will form a positive instance. Nevertheless, such an instance is not informative in that NP 4 bears little information related to the entity and thus provides few clues to explain its coreference relationship with NP 7 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 326, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Limitation of the approach", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In fact, this relationship would be clear if [ 1 A mutant of KBF1/p50], the antecedent of NP 4 , is taken into consideration. NP 1 gives a detailed description of the entity. By comparing the string of NP 7 with this description, it is apparent that NP 7 belongs to the cluster of NP 1 , and thus should be coreferential to NP 4 . This suggests that we use the coreferential cluster, instead of its single element, to resolve an NP correctly. In our study, we propose an approach which adopts an NP-Cluster based framework to do resolution. The details of the approach are given in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitation of the approach", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Features describing the relationships between NP j and NP i 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitation of the approach", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "DefNp 1 1 if NP j is a definite NP; else 0 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitation of the approach", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "DemoNP 1 1 if NP j starts with a demonstrative; else 0 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitation of the approach", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "IndefNP 1 1 if NP j is an indefinite NP; else 0 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitation of the approach", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Pron 1 1 if NP j is a pronoun; else 0 5. Table 2 : The features in our coreference resolution system (Features 1 \u223c 18 are also used in the baseline system using NP-NP based approach) resentation, the training and the resolution procedures, in the following subsections.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 48, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Limitation of the approach", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "An instance in our approach is composed of three elements like below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "i {NP j , C k , NP i }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where NP j , like the definition in the baseline, is the noun phrase under consideration, while C k is an existing coreferential cluster. Each cluster could be referred by a reference noun phrase NP i , a certain element of the cluster. A cluster would probably contain more than one reference NPs and thus may have multiple associated instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For a training instance, the label is positive if NP j is annotated as belonging to C k , or negative if otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In our system, each instance is represented as a set of 24 features as shown in Table 2 . The features are supposed to capture the properties of NP j and C k as well as their relationships. In the table we divide the features into two groups, one describing NP j and NP i and the other describing NP j and C k . For the former group, we just use the same features set as in the baseline system, while for the latter, we introduce 6 more features:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 87, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Cluster NumAgree, Cluster GenAgree and Cluster SemAgree: These three features mark the compatibility of NP j and C k in number, gender and semantic agreement, respectively. If NP j mismatches the agreement with any element in C k , the corresponding feature is set to 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
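These cluster-level agreement features can be sketched as below. The attribute representation and the treatment of unknown values are illustrative assumptions, not details from the paper:

```python
def compatible(a, b):
    # "unknown" is treated as compatible with anything (an assumption;
    # e.g. many NPs carry no overt number or gender marking)
    return a == b or "unknown" in (a, b)

def cluster_agreement(np_j, cluster, attr):
    """1 iff NP_j is compatible with EVERY element of the cluster on the
    given attribute (number / gender / semantic class); else 0."""
    return int(all(compatible(np_j[attr], np_i[attr]) for np_i in cluster))
```

A single incompatible cluster member zeroes the feature, which is stricter than pairwise agreement against one antecedent NP.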
|
{ |
|
"text": "Cluster Length: The number of NPs in the cluster C k . This feature reflects the global salience of an entity in the sense that the more frequently an entity is mentioned, the more important it would probably be in text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Cluster StrSim: This feature marks the string similarity between NP j and C k . Suppose S N P j is the token set of NP j , we compute the feature value using the similarity function Str Similarity(", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "S N P j , S C k ), where S C k = N P i \u2208C k S N P i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Cluster StrLNPSim: It marks the string matching degree of NP j and the noun phrase in C k with the most number of tokens. The intuition here is that the NP with the longest string would probably bear richer description information of the referent than other elements in the cluster. The feature is calculated using the similarity function Str Similarity(", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "S N P j , S N P k ), where N P k = arg max N P i \u2208C k |S N P i | 3.2 Training procedure", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
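The two cluster-level string features can be sketched as follows, reusing the Str_Similarity function of Section 2.1 over token sets; the example cluster is hypothetical:

```python
def str_similarity(s1, s2):
    # Str_Similarity over token sets: 100 * |S1 intersect S2| / |S1|
    return 100.0 * len(set(s1) & set(s2)) / len(set(s1))

def cluster_str_sim(s_npj, cluster_token_sets):
    # Cluster_StrSim: S_Ck is the union of the token sets of all NPs in Ck
    s_ck = set().union(*cluster_token_sets)
    return str_similarity(s_npj, s_ck)

def cluster_str_lnp_sim(s_npj, cluster_token_sets):
    # Cluster_StrLNPSim: compare against the cluster NP with the most tokens,
    # which likely bears the richest description of the referent
    s_npk = max(cluster_token_sets, key=len)
    return str_similarity(s_npj, s_npk)
```

The two features can disagree: tokens of NP j scattered over several short cluster members are all covered by the union but only partially by the single longest NP.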
|
{ |
|
"text": "Given an annotated training document, we process the noun phrases from beginning to end. For each anaphoric noun phrase NP j , we consider its preceding coreferential clusters from right to left 3 . For each cluster, we create only one instance by taking the last NP in the cluster as the reference NP. The process will not terminate until the cluster to which NP j belongs is found.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To make it clear, consider the example in Table 1 again. For the noun phrase [ 7 This mutant], the annotated preceding coreferential clusters are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "C1: { . . . , NP 2 , NP 6 } C2: { . . . , NP 5 } C3: { NP 1 , NP 4 } C4: { . . . , NP 3 }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Thus three training instances are generated:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "i { NP 7 , C1, NP 6 } i { NP 7 , C2, NP 5 } i { NP 7 , C3, NP 4 }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Among them, the first two instances are labelled as negative while the last one is positive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
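The training-instance generation described above, applied to the Table 1 example, can be sketched as:

```python
def make_cluster_instances(np_j, preceding_clusters, true_cluster_id):
    """preceding_clusters: (cluster_id, NPs) pairs ordered right to left.
    One instance per cluster, using its last NP as the reference NP;
    generation stops once the cluster NP_j belongs to is reached."""
    instances = []
    for cid, nps in preceding_clusters:
        label = 1 if cid == true_cluster_id else 0
        instances.append((np_j, cid, nps[-1], label))
        if label == 1:
            break
    return instances

# Table 1 example: clusters from right to left; NP7 belongs to C3
clusters = [("C1", ["NP2", "NP6"]), ("C2", ["NP5"]),
            ("C3", ["NP1", "NP4"]), ("C4", ["NP3"])]
instances = make_cluster_instances("NP7", clusters, "C3")
# three instances: (NP7, C1, NP6) negative, (NP7, C2, NP5) negative,
# (NP7, C3, NP4) positive; C4 is never reached
```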
|
{ |
|
"text": "After the training instances are ready, we use C5.0 learning algorithm to learn a decision tree classifier as in the baseline approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Instance representation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The resolution procedure is the counterpart of the training procedure. Given a testing document, for each encountered noun phrase, NP j , we create a set of instances by pairing NP j with each cluster found previously. The instances are presented to the learned decision tree to judge the likelihood that NP j is linked to a cluster. The resolution algorithm is given in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 379, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Resolution procedure", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "As described in the algorithm, for each cluster under consideration, we create multiple instances by using every NP in the cluster as the reference NP. The confidence value of the cluster is the maximal confidence value of its instances: CF cluster = max N P i \u2208cluster CF( i {NP j , cluster, NP i }). Similar to the baseline system, two cluster selection strategies, i.e. MRF and BF, could be applied to link NP j to a proper cluster. For the MRF strategy, NP j is linked to the closest cluster with a confidence value above 0.5, while for BF, it is linked to the cluster with the maximal confidence value (above 0.5).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Resolution procedure",

"sec_num": "3.3"

},

{

"text": "Figure 1: The cluster identification algorithm. algorithm RESOLVE(a testing document d): ClusterSet = \u2205; // suppose d has N markable NPs; for j = 1 to N: foreach cluster in ClusterSet: CF cluster = max N P i \u2208cluster CF( i {NP j , cluster, NP i }); select a proper cluster, BestCluster, according to a certain cluster selection strategy; if BestCluster != NULL: BestCluster = BestCluster \u222a {NP j }; else: // create a new cluster NewCluster = {NP j }; ClusterSet = ClusterSet \u222a {NewCluster}.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Resolution procedure",

"sec_num": "3.3"

},
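A minimal Python sketch of this resolution procedure, assuming `cf(np_j, cluster, np_i)` is a stand-in for the learned decision tree's confidence and using the Best-First strategy:

```python
def resolve(markables, cf, threshold=0.5):
    """Incrementally link each markable NP to a preceding cluster.

    cf(np_j, cluster, np_i) -> confidence value; a cluster's CF is the
    maximum over instances built with each of its NPs as reference NP.
    Best-First: pick the cluster with the maximal CF above threshold.
    """
    clusters = []
    for np_j in markables:
        best, best_cf = None, threshold
        for cluster in clusters:
            c_cf = max(cf(np_j, cluster, np_i) for np_i in cluster)
            if c_cf > best_cf:
                best, best_cf = cluster, c_cf
        if best is not None:
            best.append(np_j)          # link NP_j to the selected cluster
        else:
            clusters.append([np_j])    # otherwise start a new cluster
    return clusters
```

With a toy confidence function that scores 1.0 for identical strings and 0.0 otherwise, `resolve(["x", "y", "x"], ...)` groups the two occurrences of `"x"` into one cluster and leaves `"y"` alone.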
|
{ |
|
"text": "As noted above, the idea of the NP-Cluster based approach is different from the NP-NP based approach. However, due to the fact that in our approach a cluster is processed based on its reference NPs, the framework of our approach could be reduced to the NP-NP based framework if the cluster-related features were removed. From this point of view, this approach could be considered as an extension of the baseline approach by applying additional cluster features as the properties of NP i . These features provide richer description information of the entity, and thus make the coreference relationship between two NPs more apparent. In this way, both rules learning and coreference determination capabilities of the original approach could be enhanced. Table 3 : The performance of different coreference resolution systems consists of totally 228 MEDLINE abstracts selected from the GENIA data set. The average length of the documents in collection is 244 words. One characteristic of the bio-literature is that pronouns only occupy about 3% among all the NPs. This ratio is quite low compared to that in newswire domain (e.g. above 10% for MUC data set).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 752, |
|
"end": 759, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison of NP-NP and NP-Cluster based approaches", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "A pipeline of NLP components is applied to pre-process an input raw text. Among them, NE recognition, part-of-speech tagging and text chunking adopt the same HMM based engine with error-driven learning capability (Zhou and Su, 2002) . The NE recognition component trained on GENIA (Shen et al., 2003) can recognize up to 23 common biomedical entity types with an overall performance of 66.1 Fmeasure (P=66.5% R=65.7%). In addition, to remove the apparent non-anaphors (e.g., embedded proper nouns) in advance, a heuristicbased non-anaphoricity identification module is applied, which successfully removes 50.0% nonanaphors with a precision of 83.5% for our data set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 232, |
|
"text": "(Zhou and Su, 2002)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 300, |
|
"text": "GENIA (Shen et al., 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our experiments were done on first 100 documents from the annotated corpus, among them 70 for training and the other 30 for testing. Throughout these experiments, default learning parameters were applied in the C5.0 algorithm. The recall and precision were calculated automatically according to the scoring scheme proposed by Vilain et al. (1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 346, |
|
"text": "Vilain et al. (1995)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In Table 3 we compared the performance of different coreference resolution systems. The first line summarizes the results of the baseline system using traditional NP-NP based approach as described in Section 2. Using BF strategy, Baseline obtains 80.3% recall and 77.5% precision. These results are better than the work by Castano et al. (2002) and Yang et al. (2004) , which were also tested on the MEDLINE data set and reported a F-measure of about 74% and 69%, respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 344, |
|
"text": "Castano et al. (2002)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 367, |
|
"text": "Yang et al. (2004)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In the experiments, we evaluated another NP-NP based system, AllAnte. It adopts a similar learning framework as Baseline except that during training it generates the positive instances by paring an NP with all its antecedents instead of only the closest one. The system attempts to use such an instance selection strategy to incorporate the information from coreferential clusters. But the results are nevertheless disappointing: although this strategy boosts the recall by 5.4%, the precision drops considerably by above 6% at the same time. The overall F-measure is even lower than the baseline systems. The last line of Table 3 demonstrates the results of our NP-Cluster based approach. For BF strategy, the system achieves 84.9% recall and 78.8% precision. As opposed to the baseline system, the recall rises by 4.6% while the precision still gains slightly by 1.3%. Overall, we observe the increase of F-measure by 2.8%.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 623, |
|
"end": 630, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The results in Table 3 also indicate that the BF strategy is superior to the MRF strategy. A similar finding was also reported by Ng and Cardie (2002b) in the MUC data set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 151, |
|
"text": "Ng and Cardie (2002b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 22, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To gain insight into the difference in the performance between our NP-Cluster based system and the NP-NP based system, we compared the decision trees generated in the two systems in Figure 2 . In both trees, the string-similarity features occur on the top portion, which supports the arguments by (Strube et al., 2002) and (Yang et al., 2004) that string-matching is a crucial factor for NP coreference resolution. As shown in the figure, the feature StrSim 1 in left tree is completely replaced by the Cluster StrSim and Cluster StrLNPSim in the right tree, which means that matching the tokens with a cluster is more reliable than with a single NP. Moreover, the cluster length will also be checked when the NP under consideration has low similarity against a cluster. These evidences prove that the information from clusters is quite important for the coreference resolution on the data set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 297, |
|
"end": 318, |
|
"text": "(Strube et al., 2002)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 342, |
|
"text": "(Yang et al., 2004)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 190, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The decision tree visualizes the importance of the features for a data set. However, the tree is learned from the documents where coreferential clusters are correctly annotated. During resolu-HeadMatch = 0: :...NameAlias = 1: 1 (22/1) : NameAlias = 0: : :...Appositive = 0: 0 (13095/265) : Appositive = 1: 1 (15/4) HeadMatch = 1: :...StrSim_1 > 71:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ":...DemoNP_1 = 0: 1 (615/29) : DemoNP_1 = 1: :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ":...NumAgree = 0: 0 (5) : NumAgree = 1: 1 (26) StrSim_1 <= 71: :...DemoNP_2 = 1: 1 (12/2) DemoNP_2 = 0: :...StrSim_2 <= 77: 0 (144/17) StrSim_2 > 77: :...StrSim_1 <= 33: 0 (42/11) StrSim_1 > 33: 1 (38/11) HeadMatch = 1: :...Cluster_StrSim > 66: 1 (663/36) : Cluster_StrSim <= 66: :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ":...StrSim_2 <= 85: 0 (140/14) :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "StrSim_2 > 85: :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ":...Cluster_StrLNPSim > 50: 1 (16/1) :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Cluster_StrLNPSim <= 50: :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ":...Cluster_Length <= 5: 0 (59/17) :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Cluster_Length > 5: 1 (4) HeadMatch = 0: :...NameAlias = 1: 1 (22/1) NameAlias = 0: :...Appositive = 1: 1 (15/4) Appositive = 0: :...StrSim_2 <= 54: :.. StrSim_2 > 54: :.. 84.9 78.9 81.8 f 1\u223c21 , f 23 , f 24 , f 22 84.9 78.8 81.7 Table 4 : Performance using combined features (f i refers to the i(th) feature listed in Table 2) tion, unfortunately, the found clusters are usually not completely correct, and as a result the features important in training data may not be also helpful for testing data. Therefore, in the experiments we were concerned about which features really matter for the real coreference resolution. For this purpose, we tested our system using different features and evaluated their performance in Table 4 . Here we just considered feature Cluster Length (f 22 ), Cluster StrSim (f 23 ) and Cluster StrLNPSim (f 24 ), as Figure 2 has indicated that among the cluster-related features only these three are possibly effective for resolution. Throughout the experiment, the Best-First strategy was applied.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 237, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 327, |
|
"text": "Table 2)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 721, |
|
"end": 728, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 844, |
|
"end": 852, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
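The decision-tree listing above can be read as a small rule cascade. A hedged Python sketch (the function `classify`, its argument order, and the placement of the first fragments under HeadMatch = 1 are our reading of the listing; the sub-branches elided under StrSim_2 <= 54 / > 54 default to 0 here):

```python
def classify(str_sim_2, head_match, cluster_str_lnp_sim, cluster_length,
             name_alias=0, appositive=0):
    """Mimic the listed tree: returns 1 (link NP to cluster) or 0 (do not).

    Thresholds are taken from the listing; branches it elides default to 0.
    """
    if head_match:
        if str_sim_2 <= 85:
            return 0
        if cluster_str_lnp_sim > 50:
            return 1
        return 1 if cluster_length > 5 else 0
    if name_alias or appositive:
        return 1
    return 0
```

Leaf annotations such as `(140/14)` give the number of covered instances and errors at that leaf, which is where the smoothed confidence values come from.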
|
{ |
|
"text": "As illustrated in the table, we could observe that:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "1. Without the three features, the system is equivalent to the baseline system in terms of the same recall and precision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "2. Cluster StrSim (f 23 ) is the most effective as it contributes most to the system performance. Simply using this feature boosts the F-measure by 2.7%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "3. Cluster StrLNPSim (f 24 ) is also effective by improving the F-measure by 2.1% alone.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "When combined with f 23 , it leads to the best F-measure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and discussions", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Cluster Length (f 22 ) only brings 0.1% Fmeasure improvement. It could barely increase, or even worse, reduces the Fmeasure when used together with the the other two features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.", |
|
"sec_num": null |
|
}, |
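For illustration, the two cluster string-similarity features discussed above might be computed along these lines (a sketch under stated assumptions: the token-overlap measure and the max-over-members reading of Cluster_StrSim are ours; the paper does not give the exact similarity function in this section):

```python
def token_overlap_sim(a: str, b: str) -> int:
    """Percentage of a's tokens that also occur in b, rounded to 0-100."""
    tokens_a = a.lower().split()
    if not tokens_a:
        return 0
    tokens_b = set(b.lower().split())
    return round(100 * sum(t in tokens_b for t in tokens_a) / len(tokens_a))

def cluster_str_sim(np_str, cluster):
    # Cluster_StrSim: similarity of the NP against the cluster as a whole;
    # taken here as the max over its members (an assumption, not the paper's spec).
    return max(token_overlap_sim(np_str, member) for member in cluster)

def cluster_str_lnp_sim(np_str, cluster):
    # Cluster_StrLNPSim: similarity against the longest NP in the cluster.
    longest = max(cluster, key=len)
    return token_overlap_sim(np_str, longest)
```

A cluster-level measure like this is what lets a short anaphor match an entity even when its immediately preceding mention is itself abbreviated.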
|
{ |
|
"text": "To our knowledge, our work is the first supervised-learning based attempt to do coreference resolution by exploring the relationship between an NP and coreferential clusters. In the heuristic salience-based algorithm for pronoun resolution, Lappin and Leass (1994) introduce a procedure for identifying anaphorically linked NP as a cluster for which a global salience value is computed as the sum of the salience values of its elements. Cardie and Wagstaff (1999) have proposed an unsupervised approach which also incorporates cluster information into consideration. Their approach uses hard constraints to preclude the link of an NP to a cluster mismatching the number, gender or semantic agreements, while our approach takes these agreements together with other features (e.g. cluster-length, string-matching degree,etc) as preference factors for cluster selection. Besides, the idea of clustering can be seen in the research of cross-document coreference, where NPs with high context similarity would be chained together based on certain clustering methods (Bagga and Biermann, 1998; Gooi and Allan, 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1060, |
|
"end": 1086, |
|
"text": "(Bagga and Biermann, 1998;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1087, |
|
"end": 1108, |
|
"text": "Gooi and Allan, 2004)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper we have proposed a supervised learning-based approach to coreference resolution. Rather than mining the coreferential relationship between NP pairs as in conventional approaches, our approach does resolution by exploring the relationships between an NP and the coreferential clusters. Compared to individual NPs, coreferential clusters provide more information for rules learning and reference determination. In the paper, we first introduced the conventional NP-NP based approach and analyzed its limitation. Then we described in details the framework of our NP-Cluster based approach, including the instance representation, training and resolution procedures. We evaluated our approach in the biomedical domain, and the experimental results showed that our approach outperforms the NP-NP based approach in both recall (4.6%) and precision (1.3%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "While our approach achieves better performance, there is still room for further improvement. For example, the approach just resolves an NP using the cluster information available so far. Nevertheless, the text after the NP would probably give important supplementary information of the clusters. The ignorance of such information may affect the correct resolution of the NP. In the future work, we plan to work out more robust clustering algorithm to link an NP to a globally best cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this paper the term \"cluster\" can be interchangeably used as \"chain\", while the former better emphasizes the equivalence property of coreference relationship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The confidence value is obtained by using the smoothed ratio p+1 t+2 , where p is the number of positive instances and t is the total number of instances contained in the corresponding leaf node.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
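This is a Laplace-style smoothing of the leaf's positive rate; a minimal sketch (the function name `leaf_confidence` is ours):

```python
def leaf_confidence(p: int, t: int) -> float:
    """Confidence of a decision-tree leaf: the smoothed ratio (p + 1) / (t + 2),
    where p is the number of positive instances and t the total number of
    instances that reached the leaf."""
    return (p + 1) / (t + 2)

# A leaf covering 16 instances with 1 error (15 positive) gets
# confidence (15 + 1) / (16 + 2); an empty leaf defaults to 0.5.
```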
|
{ |
|
"text": "The NP-Cluster based approach Similar to the baseline approach, our approach also recasts coreference resolution as a binary classification problem. The difference, however, is that our approach aims to learn a classifier which would select the most preferred cluster, instead of the most preferred antecedent, for an encountered NP in text. We will give the framework of the approach, including the instance rep-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
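Under this framing, the resolution procedure can be sketched as scanning the preceding clusters and linking the NP to the best-scoring one (the names and classifier interface here are ours, not the paper's code; the details of its Best-First strategy may differ):

```python
def resolve(np_str, clusters, classifier, threshold=0.5):
    """Link np_str to the most preferred preceding cluster, or start a new one.

    clusters:   list of candidate coreferential clusters (each a list of NPs).
    classifier: callable returning a confidence in [0, 1] that np_str
                belongs to a given cluster.
    """
    best, best_score = None, threshold
    for cluster in clusters:
        score = classifier(np_str, cluster)
        if score >= best_score:   # >= so ties favor the later (closer) cluster
            best, best_score = cluster, score
    if best is None:
        clusters.append([np_str])  # no cluster preferred: start a new entity
    else:
        best.append(np_str)
    return clusters
```

Using `>=` makes ties resolve toward the cluster whose last NP is closest to the anaphor, a common heuristic in pairwise resolvers as well.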
|
{ |
|
"text": "We define the position of a cluster as the position of the last NP in the cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The annotation scheme and samples are available in http://nlp.i2r.a-star.edu.sg/resources/GENIAcoreference", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": {}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "The resulting decision trees for the NP-NP and NP-Cluster based approaches", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"text": "An Example from the data set In the above text, [ 1 A mutant of KBF1/p50], [", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td/><td>ProperNP 1</td><td>1 if NP j is a proper NP; else 0</td></tr><tr><td>6.</td><td>DefNP 2</td><td>1 if NP i is a definite NP; else 0</td></tr><tr><td>7.</td><td>DemoNP 2</td><td>1 if NP i starts with a demonstrative; else 0</td></tr><tr><td>8.</td><td>IndefNP 2</td><td>1 if NP i is an indefinite NP; else 0</td></tr><tr><td>9.</td><td>Pron 2</td><td>1 if NP i is a pronoun; else 0</td></tr><tr><td colspan=\"2\">10. ProperNP 2</td><td>1 if NP i is a proper NP; else 0</td></tr><tr><td colspan=\"2\">11. Appositive</td><td>1 if NP i and NP j are in an appositive structure; else 0</td></tr><tr><td colspan=\"2\">12. NameAlias</td><td>1 if NP i and NP j are in an alias of the other; else 0</td></tr><tr><td colspan=\"2\">13. GenderAgree</td><td>1 if NP i and NP j agree in gender; else 0</td></tr><tr><td colspan=\"2\">14. NumAgree</td><td>1 if NP i and NP j agree in number; else 0</td></tr><tr><td colspan=\"2\">15. SemanticAgree</td><td>1 if NP i and NP j agree in semantic class; else 0</td></tr><tr><td colspan=\"2\">16. HeadStrMatch</td><td>1 if NP i and NP j contain the same head string; else 0</td></tr><tr><td colspan=\"2\">17. StrSim 1</td><td>The string similarity of NP j against NP i</td></tr><tr><td colspan=\"2\">18. StrSim 2</td><td/></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "The string similarity of NP i against NP j Features describing the relationships between NP j and cluster C k 19. Cluster NumAgree 1 if C k and NP j agree in number; else 0 20. Cluster GenAgree 1 if C k and NP j agree in gender; else 0 21. Cluster SemAgree 1 if C k and NP j agree in semantic class; else 0 22. Cluster Length The number of elements contained in C k 23. Cluster StrSim The string similarity of NP j against C k 24. Cluster StrLNPSim The string similarity of NP j against the longest NP in C k", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td/><td/><td>MRF</td><td/><td/><td>BF</td><td/></tr><tr><td>Experiments</td><td>R</td><td>P</td><td>F</td><td>R</td><td>P</td><td>F</td></tr><tr><td>Baseline</td><td colspan=\"6\">80.2 77.4 78.8 80.3 77.5 78.9</td></tr><tr><td>AllAnte</td><td>84.4</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"4\">4.1 Data collection</td></tr><tr><td/><td/><td/><td colspan=\"4\">Our coreference resolution system is a compo-</td></tr><tr><td/><td/><td/><td colspan=\"4\">nent of our information extraction system in</td></tr><tr><td/><td/><td/><td colspan=\"4\">biomedical domain. For this purpose, an anno-tated coreference corpus have been built 4 , which</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "70.2 76.6 85.7 71.4 77.9 Our Approach 84.4 78.2 81.2 84.9 78.8 81.7", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |