{
"paper_id": "P04-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:43:24.524834Z"
},
"title": "Improving Pronoun Resolution by Incorporating Coreferential Information of Candidates",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Chew",
"middle": [],
"last": "Lim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {
"postCode": "117543",
"settlement": "Singapore"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Coreferential information of a candidate, such as the properties of its antecedents, is important for pronoun resolution because it reflects the salience of the candidate in the local discourse. Such information, however, is usually ignored in previous learning-based systems. In this paper we present a trainable model which incorporates coreferential information of candidates into pronoun resolution. Preliminary experiments show that our model will boost the resolution performance given the right antecedents of the candidates. We further discuss how to apply our model in real resolution where the antecedents of the candidate are found by a separate noun phrase resolution module. The experimental results show that our model still achieves better performance than the baseline.",
"pdf_parse": {
"paper_id": "P04-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "Coreferential information of a candidate, such as the properties of its antecedents, is important for pronoun resolution because it reflects the salience of the candidate in the local discourse. Such information, however, is usually ignored in previous learning-based systems. In this paper we present a trainable model which incorporates coreferential information of candidates into pronoun resolution. Preliminary experiments show that our model will boost the resolution performance given the right antecedents of the candidates. We further discuss how to apply our model in real resolution where the antecedents of the candidate are found by a separate noun phrase resolution module. The experimental results show that our model still achieves better performance than the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, supervised machine learning approaches have been widely explored in reference resolution and achieved considerable success (Ge et al., 1998; Soon et al., 2001; Ng and Cardie, 2002; Strube and Muller, 2003; Yang et al., 2003) . Most learning-based pronoun resolution systems determine the reference relationship between an anaphor and its antecedent candidate only from the properties of the pair. The knowledge about the context of anaphor and antecedent is nevertheless ignored. However, research in centering theory (Sidner, 1981; Grosz et al., 1983; Grosz et al., 1995; Tetreault, 2001) has revealed that the local focusing (or centering) also has a great effect on the processing of pronominal expressions. The choices of the antecedents of pronouns usually depend on the center of attention throughout the local discourse segment (Mitkov, 1999) .",
"cite_spans": [
{
"start": 140,
"end": 157,
"text": "(Ge et al., 1998;",
"ref_id": "BIBREF1"
},
{
"start": 158,
"end": 176,
"text": "Soon et al., 2001;",
"ref_id": "BIBREF13"
},
{
"start": 177,
"end": 197,
"text": "Ng and Cardie, 2002;",
"ref_id": "BIBREF9"
},
{
"start": 198,
"end": 222,
"text": "Strube and Muller, 2003;",
"ref_id": "BIBREF14"
},
{
"start": 223,
"end": 241,
"text": "Yang et al., 2003)",
"ref_id": "BIBREF18"
},
{
"start": 535,
"end": 549,
"text": "(Sidner, 1981;",
"ref_id": "BIBREF12"
},
{
"start": 550,
"end": 569,
"text": "Grosz et al., 1983;",
"ref_id": "BIBREF2"
},
{
"start": 570,
"end": 589,
"text": "Grosz et al., 1995;",
"ref_id": "BIBREF3"
},
{
"start": 590,
"end": 606,
"text": "Tetreault, 2001)",
"ref_id": "BIBREF16"
},
{
"start": 852,
"end": 866,
"text": "(Mitkov, 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To determine the salience of a candidate in the local context, we may need to check the coreferential information of the candidate, such as the existence and properties of its antecedents. In fact, such information has been used for pronoun resolution in many heuristicbased systems. The S-List model (Strube, 1998) , for example, assumes that a co-referring candidate is a hearer-old discourse entity and is preferred to other hearer-new candidates. In the algorithms based on the centering theory (Brennan et al., 1987; Grosz et al., 1995) , if a candidate and its antecedent are the backwardlooking centers of two subsequent utterances respectively, the candidate would be the most preferred since the CONTINUE transition is always ranked higher than SHIFT or RETAIN.",
"cite_spans": [
{
"start": 301,
"end": 315,
"text": "(Strube, 1998)",
"ref_id": "BIBREF15"
},
{
"start": 499,
"end": 521,
"text": "(Brennan et al., 1987;",
"ref_id": "BIBREF0"
},
{
"start": 522,
"end": 541,
"text": "Grosz et al., 1995)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a supervised learning-based pronoun resolution system which incorporates coreferential information of candidates in a trainable model. For each candidate, we take into consideration the properties of its antecedents in terms of features (henceforth backward features), and use the supervised learning method to explore their influences on pronoun resolution. In the study, we start our exploration on the capability of the model by applying it in an ideal environment where the antecedents of the candidates are correctly identified and the backward features are optimally set. The experiments on MUC-6 (1995) and MUC-7 (1998) corpora show that incorporating coreferential information of candidates boosts the system performance significantly. Further, we apply our model in the real resolution where the antecedents of the candidates are provided by separate noun phrase resolution modules. The experimental results show that our model still outperforms the baseline, even with the low recall of the non-pronoun resolution module.",
"cite_spans": [
{
"start": 623,
"end": 635,
"text": "MUC-6 (1995)",
"ref_id": null
},
{
"start": 640,
"end": 652,
"text": "MUC-7 (1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remaining of this paper is organized as follows. Section 2 discusses the importance of the coreferential information for candidate evaluation. Section 3 introduces the baseline learning framework. Section 4 presents and evaluates the learning model which uses backward fea-tures to capture coreferential information, while Section 5 proposes how to apply the model in real resolution. Section 6 describes related research work. Finally, conclusion is given in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In pronoun resolution, the center of attention throughout the discourse segment is a very important factor for antecedent selection (Mitkov, 1999) . If a candidate is the focus (or center) of the local discourse, it would be selected as the antecedent with a high possibility. See the following example, <s> Gitano 1 has pulled off a clever illusion 2 with its 3 advertising 4 . <s> <s> T he campaign 5 gives its 6 clothes a youthful and trendy image to lure consumers into the store. <s> Table 1 : A text segment from MUC-6 data set",
"cite_spans": [
{
"start": 132,
"end": 146,
"text": "(Mitkov, 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 489,
"end": 496,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Impact of Coreferential Information on Pronoun Resolution",
"sec_num": "2"
},
{
"text": "In the above text, the pronoun \"its 6 \" has several antecedent candidates, i.e., \"Gitano 1 \", \"a clever illusion 2 \", \"its 3 \", \"its advertising 4 \" and \"T he campaign 5 \". Without looking back, \"T he campaign 5 \" would be probably selected because of its syntactic role (Subject) and its distance to the anaphor. However, given the knowledge that the company Gitano is the focus of the local context and \"its 3 \" refers to \"Gitano 1 \", it would be clear that the pronoun \"its 6 \" should be resolved to \"its 3 \" and thus \"Gitano 1 \", rather than other competitors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of Coreferential Information on Pronoun Resolution",
"sec_num": "2"
},
{
"text": "To determine whether a candidate is the \"focus\" entity, we should check how the status (e.g. grammatical functions) of the entity alternates in the local context. Therefore, it is necessary to track the NPs in the coreferential chain of the candidate. For example, the syntactic roles (i.e., subject) of the antecedents of \"its 3 \" would indicate that \"its 3 \" refers to the most salient entity in the discourse segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of Coreferential Information on Pronoun Resolution",
"sec_num": "2"
},
{
"text": "In our study, we keep the properties of the antecedents as features of the candidates, and use the supervised learning method to explore their influence on pronoun resolution. Actually, to determine the local focus, we only need to check the entities in a short discourse segment. That is, for a candidate, the number of its adjacent antecedents to be checked is limited. Therefore, we could evaluate the salience of a candidate by looking back only its closest antecedent instead of each element in its coreferential chain, with the assumption that the closest antecedent is able to provide sufficient information for the evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of Coreferential Information on Pronoun Resolution",
"sec_num": "2"
},
{
"text": "3 The Baseline Learning Framework",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of Coreferential Information on Pronoun Resolution",
"sec_num": "2"
},
{
"text": "Our baseline system adopts the common learning-based framework employed in the system by Soon et al. (2001) .",
"cite_spans": [
{
"start": 89,
"end": 107,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of Coreferential Information on Pronoun Resolution",
"sec_num": "2"
},
{
"text": "In the learning framework, each training or testing instance takes the form of i {ana, candi }, where ana is the possible anaphor and candi is its antecedent candidate 1 . An instance is associated with a feature vector to describe their relationships. As listed in Table 2 , we only consider those knowledge-poor and domain-independent features which, although superficial, have been proved efficient for pronoun resolution in many previous systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Impact of Coreferential Information on Pronoun Resolution",
"sec_num": "2"
},
{
"text": "During training, for each anaphor in a given text, a positive instance is created by paring the anaphor and its closest antecedent. Also a set of negative instances is formed by paring the anaphor and each of the intervening candidates. Based on the training instances, a binary classifier is generated using C5.0 learning algorithm (Quinlan, 1993) . During resolution, each possible anaphor ana, is paired in turn with each preceding antecedent candidate, candi, from right to left to form a testing instance. This instance is presented to the classifier, which will then return a positive or negative result indicating whether or not they are co-referent. The process terminates once an instance i {ana, candi } is labelled as positive, and ana will be resolved to candi in that case.",
"cite_spans": [
{
"start": 333,
"end": 348,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Impact of Coreferential Information on Pronoun Resolution",
"sec_num": "2"
},
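The right-to-left resolution loop described above can be sketched as follows; this is a minimal illustration, where `classifier` and `features` are hypothetical stand-ins for the learned C5.0 model and the Table 2 feature extractor, not the paper's actual implementation.

```python
def resolve(anaphor, candidates, classifier, features):
    """Test candidates from right to left (nearest first); return the
    first one the classifier labels positive, or None if all fail."""
    for candi in reversed(candidates):
        if classifier(features(anaphor, candi)) == "+":
            return candi
    return None

# Toy demonstration: a "classifier" that accepts only subject NPs.
cands = [{"np": "Gitano", "subj": 1}, {"np": "a clever illusion", "subj": 0}]
toy = lambda fv: "+" if fv["subj"] else "-"
feat = lambda ana, c: {"subj": c["subj"]}
antecedent = resolve("its", cands, toy, feat)
```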
{
"text": "The learning procedure in our model is similar to the above baseline method, except that for each candidate, we take into consideration its closest antecedent, if possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Learning Model Incorporating Coreferential Information",
"sec_num": "4"
},
{
"text": "During both training and testing, we adopt the same instance selection strategy as in the baseline model. The only difference, however, is the structure of the training or testing instances. Specifically, each instance in our model is composed of three elements like below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "Features describing the candidate (candi ) 1. candi DefNp 1 if candi is a definite NP; else 0 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "candi DemoNP 1 if candi is an indefinite NP; else 0 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "candi Pron 1 if candi is a pronoun; else 0 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "candi ProperNP 1 if candi is a proper name; else 0 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "candi NE Type 1 if candi is an \"organization\" named-entity; 2 if \"person\", 3 if other types, 0 if not a NE 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "candi Human the likelihood (0-100) that candi is a human entity (obtained from WordNet) 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "candi FirstNPInSent 1 if candi is the first NP in the sentence where it occurs 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "candi Nearest 1 if candi is the candidate nearest to the anaphor; else 0 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "candi SubjNP 1 if candi is the subject of the sentence it occurs; else 0 Features describing the anaphor (ana): 10. ana Reflexive 1 if ana is a reflexive pronoun; else 0 11. ana Type 1 if ana is a third-person pronoun (he, she,. . . ); 2 if a single neuter pronoun (it,. . . ); 3 if a plural neuter pronoun (they,. . . ); 4 if other types Features describing the relationships between candi and ana: 12. SentDist Distance between candi and ana in sentences 13. ParaDist",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "Distance between candi and ana in paragraphs 14. CollPattern 1 if candi has an identical collocation pattern with ana; else 0 Table 2 : Feature set for the baseline pronoun resolution system i {ana, candi, ante-of-candi} where ana and candi, similar to the definition in the baseline model, are the anaphor and one of its candidates, respectively. The new added element in the instance definition, anteof-candi, is the possible closest antecedent of candi in its coreferential chain. The ante-ofcandi is set to NIL in the case when candi has no antecedent.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
{
"text": "Consider the example in Table 1 again. For the pronoun \"it 6 \", three training instances will be generated, namely, i {its 6 , T he compaign 5 , NIL}, i {its 6 , its advertising 4 , NIL}, and i {its 6 , its 3 , Gitano 1 }.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Instance Structure",
"sec_num": "4.1"
},
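The three-element instance structure above can be made concrete with a short sketch; `make_instances` and the antecedent map are illustrative names, not the paper's code, and Python's `None` stands in for the NIL value.

```python
def make_instances(anaphor, candidates, closest_antecedent):
    """For each candidate, attach its closest antecedent (or None,
    standing in for NIL) as the third element of the instance."""
    return [(anaphor, c, closest_antecedent.get(c)) for c in candidates]

# Mirroring the Table 1 example: only "its3" has a known antecedent.
ante = {"its3": "Gitano1"}
instances = make_instances(
    "its6", ["The campaign5", "its advertising4", "its3"], ante)
```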
{
"text": "In addition to the features adopted in the baseline system, we introduce a set of backward features to describe the element ante-of-candi. The ten features (15-24) are listed in Table 3 with their respective possible values.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 185,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Backward Features",
"sec_num": "4.2"
},
{
"text": "Like feature 1-9, features 15-22 describe the lexical, grammatical and semantic properties of ante-of-candi. The inclusion of the two features Apposition (23) and candi NoAntecedent (24) is inspired by the work of Strube (1998) . The feature Apposition marks whether or not candi and ante-of-candi occur in the same appositive structure. The underlying purpose of this feature is to capture the pattern that proper names are accompanied by an appositive. The entity with such a pattern may often be related to the hearers' knowledge and has low preference. The feature candi NoAntecedent marks whether or not a candidate has a valid antecedent in the preceding text. As stipulated in Strube's work, co-referring expressions belong to hearer-old entities and therefore have higher preference than other candidates. When the feature is assigned value 1, all the other backward features (15-23) are set to 0.",
"cite_spans": [
{
"start": 214,
"end": 227,
"text": "Strube (1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Backward Features",
"sec_num": "4.2"
},
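The zeroing rule for candi NoAntecedent described above can be sketched as below; the feature names are illustrative placeholders, not the paper's exact feature vector.

```python
def backward_features(ante):
    """Build backward features for a candidate. If the candidate has no
    antecedent, candi_NoAntecedent is 1 and all other backward features
    stay 0, as stipulated in the text."""
    feats = {"ante_SubjNP": 0, "ante_Pron": 0, "ante_Human": 0,
             "candi_NoAntecedent": 1}
    if ante is not None:
        feats.update(ante)              # copy in antecedent properties
        feats["candi_NoAntecedent"] = 0
    return feats

fv_none = backward_features(None)                 # no antecedent
fv_subj = backward_features({"ante_SubjNP": 1})   # subject antecedent
```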
{
"text": "In our study we used the standard MUC-6 and MUC-7 coreference corpora. In each data set, 30 \"dry-run\" documents were annotated for training as well as 20-30 documents for testing. The raw documents were preprocessed by a pipeline of automatic NLP components (e.g. NP chunker, part-of-speech tagger, named-entity recognizer) to determine the boundary of the NPs, and to provide necessary information for feature calculation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "4.3"
},
{
"text": "In an attempt to investigate the capability of our model, we evaluated the model in an optimal environment where the closest antecedent of each candidate is correctly identified. MUC-6 and MUC-7 can serve this purpose quite well; the annotated coreference information in the data sets enables us to obtain the correct closest In the next section we will further discuss how to apply our model into the real resolution. Table 4 shows the performance of different systems for resolving the pronominal anaphors 2 in MUC-6 and MUC-7. Default learning parameters for C5.0 were used throughout the experiments. In this table we evaluated the performance based on two kinds of measurements:",
"cite_spans": [],
"ref_spans": [
{
"start": 419,
"end": 426,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "4.3"
},
{
"text": "\u2022 \"Recall-and-Precision\": The above metrics evaluate the capability of the learned classifier in identifying positive instances 3 . F-measure is the harmonic mean of the two measurements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "4.3"
},
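The harmonic mean mentioned above is the standard F-measure; a minimal computation, with example recall/precision values that are not taken from Table 4:

```python
def f_measure(recall, precision):
    """Harmonic mean of recall and precision (both as fractions)."""
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)

f = f_measure(0.8, 0.6)  # illustrative values only
```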
{
"text": "\u2022 \"Success\": Success = #anaphors resolved correctly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "4.3"
},
{
"text": "The metric 4 directly reflects the pronoun resolution capability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "#total anaphors",
"sec_num": null
},
{
"text": "The first and second lines of Table 4 compare the performance of the baseline system (Base-ante-candi_SubjNP = 1: 1 (49/5) ante-candi_SubjNP = 0: :..candi_SubjNP = 1:",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "#total anaphors",
"sec_num": null
},
{
"text": ":..SentDist = 2: 0 (3) : SentDist = 0: : :..candi_Human > 0: 1 (39/2) : : candi_Human <= 0: : : :..candi_NoAntecedent = 0: 1 (8/3) : : candi_NoAntecedent = 1: 0 (3) : SentDist = 1: : :..ante-candi_Human <= 50 : 0 (4) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "#total anaphors",
"sec_num": null
},
{
"text": "ante-candi_Human > 50 : 1 (10/2) : candi_SubjNP = 0: :..candi_Pron = 1: 1 (32/7) candi_Pron = 0: :..candi_NoAntecedent = 1: :..candi_FirstNPInSent = 1: 1 (6/2) : candi_FirstNPInSent = 0: ... candi_NoAntecedent = 0: ... line) and our system (Optimal ), where DT pron and DT pron\u2212opt are the classifiers learned in the two systems, respectively. The results indicate that our system outperforms the baseline system significantly. Compared with Baseline, Optimal achieves gains in both recall (6.4% for MUC-6 and 4.1% for MUC-7) and precision (1.3% for MUC-6 and 9.0% for MUC-7). For Success, we also observe an apparent improvement by 4.7% (MUC-6) and 3.5% (MUC-7). Figure 1 shows the portion of the pruned decision tree learned for MUC-6 data set. It visualizes the importance of the backward features for the pronoun resolution on the data set. From Table 4 : Results of different systems for pronoun resolution on MUC-6 and MUC-7 (*Here we only list backward feature assigner for pronominal candidates. In RealResolve-1 to RealResolve-4, the backward features for non-pronominal candidates are all found by DT non\u2212pron .) the tree we could find that:",
"cite_spans": [],
"ref_spans": [
{
"start": 664,
"end": 672,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 850,
"end": 857,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "#total anaphors",
"sec_num": null
},
{
"text": "1.) Feature ante-candi SubjNP is of the most importance as the root feature of the tree. The decision tree would first examine the syntactic role of a candidate's antecedent, followed by that of the candidate. This nicely proves our assumption that the properties of the antecedents of the candidates provide very important information for the candidate evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "#total anaphors",
"sec_num": null
},
{
"text": "candi SubjNP rank top in the decision tree. That is, for the reference determination, the subject roles of the candidate's referent within a discourse segment will be checked in the first place. This finding supports well the suggestion in centering theory that the grammatical relations should be used as the key criteria to rank forward-looking centers in the process of focus tracking (Brennan et al., 1987; Grosz et al., 1995) .",
"cite_spans": [
{
"start": 388,
"end": 410,
"text": "(Brennan et al., 1987;",
"ref_id": "BIBREF0"
},
{
"start": 411,
"end": 430,
"text": "Grosz et al., 1995)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.) Both features ante-candi SubjNP and",
"sec_num": null
},
{
"text": "3.) candi Pron and candi NoAntecedent are to be examined in the cases when the subject-role checking fails, which confirms the hypothesis in the S-List model by Strube (1998) that co-refereing candidates would have higher preference than other candidates in the pronoun resolution.",
"cite_spans": [
{
"start": 161,
"end": 174,
"text": "Strube (1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.) Both features ante-candi SubjNP and",
"sec_num": null
},
{
"text": "In Section 4 we explored the effectiveness of the backward feature for pronoun resolution. In those experiments our model was tested in an ideal environment where the closest antecedent of a candidate can be identified correctly when generating the feature vector. However, during real resolution such coreferential information is not available, and thus a separate module has algorithm PRON-RESOLVE input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying the Model in Real Resolution",
"sec_num": "5"
},
{
"text": "DT non\u2212pron : classifier for resolving non-pronouns DT pron : classifier for resolving pronouns begin: M 1..n := the valid markables in the given document Ante[1..n] := 0 Figure 2 : The pronoun resolution algorithm by incorporating coreferential information of candidates to be employed to obtain the closest antecedent for a candidate. We describe the algorithm in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 179,
"text": "Figure 2",
"ref_id": null
},
{
"start": 366,
"end": 374,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Applying the Model in Real Resolution",
"sec_num": "5"
},
{
"text": "for i = 1 to N for j = i -1 downto 0 if (M i is a non-pron and DT non\u2212pron (i{M i , M j }) == + ) or (M i is a pron and DT pron (i{M i , M j , Ante[j]}) == +) then Ante[i] := M j break return Ante",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying the Model in Real Resolution",
"sec_num": "5"
},
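The PRON-RESOLVE procedure can be transcribed almost directly into Python; this sketch takes the two classifiers as callables (the paper's markables are 1-indexed, here 0-indexed), and the toy run at the end uses made-up stand-in classifiers, not trained models.

```python
def pron_resolve(markables, is_pron, dt_nonpron, dt_pron):
    """Scan each markable's preceding markables from right to left and
    link it to the first one the appropriate classifier accepts."""
    n = len(markables)
    ante = [None] * n
    for i in range(n):
        for j in range(i - 1, -1, -1):  # nearest preceding first
            m_i, m_j = markables[i], markables[j]
            if is_pron(m_i):
                # pronoun instances carry the candidate's own antecedent
                positive = dt_pron(m_i, m_j, ante[j])
            else:
                positive = dt_nonpron(m_i, m_j)
            if positive:
                ante[i] = m_j
                break
    return ante

# Toy run: every pronoun links to the nearest non-pronoun.
ms = ["Gitano", "its", "campaign", "its"]
res = pron_resolve(
    ms,
    is_pron=lambda m: m == "its",
    dt_nonpron=lambda a, b: False,
    dt_pron=lambda a, b, ante_b: b != "its",
)
```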
{
"text": "The algorithm takes as input two classifiers, one for the non-pronoun resolution and the other for pronoun resolution. Given a testing document, the antecedent of each NP is identified using one of these two classifiers, depending on the type of NP. Although a separate nonpronoun resolution module is required for the pronoun resolution task, this is usually not a big problem as these two modules are often integrated in coreference resolution systems. We just use the results of the one module to improve the performance of the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying the Model in Real Resolution",
"sec_num": "5"
},
{
"text": "Procedures For a pronominal candidate, its antecedent can be obtained by simply using DT pron\u2212opt . For Training Procedure: T1. Train a non-pronoun resolution classifier DT non\u2212pron and a pronoun resolution classifier DT pron , using the baseline learning framework (without backward features). T2. Apply DT non\u2212pron and DT pron to identify the antecedent of each non-pronominal and pronominal markable, respectively, in a given document. T3. Go through the document again. Generate instances with backward features assigned using the antecedent information obtained in T2. T4. Train a new pronoun resolution classifier DT pron on the instances generated in T3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Training and Testing",
"sec_num": "5.1"
},
{
"text": "Testing Procedure: R1. For each given document, do T2\u223cT3. R2. Resolve pronouns by applying DT pron . Table 5 : New training and testing procedures a non-pronominal candidate, we built a nonpronoun resolution module to identify its antecedent. The module is a duplicate of the NP coreference resolution system by Soon et al. (2001) 5 , which uses the similar learning framework as described in Section 3. In this way, we could do pronoun resolution just by running PRON-RESOLVE(DT non\u2212pron , DT pron\u2212opt ), where DT non\u2212pron is the classifier of the non-pronoun resolution module.",
"cite_spans": [
{
"start": 312,
"end": 330,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "New Training and Testing",
"sec_num": "5.1"
},
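The T1-T4 training pipeline from Table 5 can be sketched to make the data flow explicit; `train_classifier`, `resolve_with`, and `add_backward_features` are hypothetical helper names introduced only for this illustration.

```python
def train_with_backward_features(docs, train_classifier, resolve_with,
                                 add_backward_features):
    # T1: baseline classifiers, trained without backward features
    dt_nonpron = train_classifier(docs, kind="non-pronoun")
    dt_pron = train_classifier(docs, kind="pronoun")
    # T2: use them to find an antecedent for every markable
    antecedents = resolve_with(docs, dt_nonpron, dt_pron)
    # T3: regenerate instances, now with backward features filled in
    instances = add_backward_features(docs, antecedents)
    # T4: retrain the pronoun classifier on the new instances
    return train_classifier(instances, kind="pronoun")

# Toy run with logging stubs to show the order of the four steps.
log = []
def tc(data, kind): log.append(("train", kind)); return kind
def rw(docs, a, b): log.append(("resolve",)); return {}
def abf(docs, ante): log.append(("featurize",)); return docs

model = train_with_backward_features(["d1"], tc, rw, abf)
```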
{
"text": "One problem, however, is that DT pron\u2212opt is trained on the instances whose backward features are correctly assigned. During real resolution, the antecedent of a candidate is found by DT non\u2212pron or DT pron\u2212opt , and the backward feature values are not always correct. Indeed, for most noun phrase resolution systems, the recall is not very high. The antecedent sometimes can not be found, or is not the closest one in the preceding coreferential chain. Consequently, the classifier trained on the \"perfect\" feature vectors would probably fail to output anticipated results on the noisy data during real resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Training and Testing",
"sec_num": "5.1"
},
{
"text": "Thus we modify the training and testing procedures of the system. For both training and testing instances, we assign the backward feature values based on the results from separate NP resolution modules. The detailed procedures are described in Table 5 . 5 Details of the features can be found in Soon et al. (2001) algorithm REFINE-CLASSIFIER begin: DT 1 pron := DT pron for i = 1 to \u221e Use DT i pron to update the antecedents of pronominal candidates and the corresponding backward features;",
"cite_spans": [
{
"start": 254,
"end": 255,
"text": "5",
"ref_id": null
},
{
"start": 296,
"end": 314,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 244,
"end": 251,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "New Training and Testing",
"sec_num": "5.1"
},
{
"text": "Train DT i+1 pron based on the updated training instances;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Training and Testing",
"sec_num": "5.1"
},
{
"text": "if DT i+1 pron is not better than DT i pron then break; return DT i pron Figure 3 : The classifier refining algorithm",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "New Training and Testing",
"sec_num": "5.1"
},
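The REFINE-CLASSIFIER loop can be sketched as a fixed-point iteration; `train_fn` and `score_fn` are hypothetical stand-ins for retraining on the updated backward features and for evaluating a classifier, and the toy run models scores directly as numbers.

```python
def refine_classifier(dt_pron, train_fn, score_fn):
    """Repeatedly retrain on instances whose backward features come from
    the previous iteration's classifier; stop when the new classifier
    is no better than the current one, and return the current one."""
    current, current_score = dt_pron, score_fn(dt_pron)
    while True:
        candidate = train_fn(current)          # retrain on updated features
        candidate_score = score_fn(candidate)
        if candidate_score <= current_score:   # no further improvement
            break
        current, current_score = candidate, candidate_score
    return current

# Toy run: each retraining adds 0.1 success, capped at 0.8.
best = refine_classifier(
    0.5,
    train_fn=lambda c: min(c + 0.1, 0.8),
    score_fn=lambda c: c,
)
```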
{
"text": "The idea behind our approach is to train and test the pronoun resolution classifier on instances with feature values set in a consistent way. Here the purpose of DT pron and DT non\u2212pron is to provide backward feature values for training and testing instances. From this point of view, the two modules could be thought of as a preprocessing component of our pronoun resolution system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Training and Testing",
"sec_num": "5.1"
},
{
"text": "If the classifier DT pron outperforms DT pron as expected, we can employ DT pron in place of DT pron to generate backward features for pronominal candidates, and then train a classifier DT pron based on the updated training instances. Since DT pron produces more correct feature values than DT pron , we could expect that DT pron will not be worse, if not better, than DT pron . Such a process could be repeated to refine the pronoun resolution classifier. The algorithm is described in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 487,
"end": 495,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classifier Refining",
"sec_num": "5.2"
},
{
"text": "In algorithm REFINE-CLASSIFIER, the iteration terminates when the new trained classifier DT i+1 pron provides no further improvement than DT i pron . In this case, we can replace DT i+1 pron by DT i pron during the i+1(th) testing procedure. That means, by simply running PRON-RESOLVE(DT non\u2212pron ,DT i pron ), we can use for both backward feature computation and instance classification tasks, rather than applying DT pron and DT pron subsequently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Refining",
"sec_num": "5.2"
},
{
"text": "In the experiments we evaluated the performance of our model in real pronoun resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.3"
},
{
"text": "The performance of our model depends on the performance of the non-pronoun resolution classifier, DT non\u2212pron . Hence we first examined the coreference resolution capability of DT non\u2212pron based on the standard scoring scheme by Vilain et al. (1995) . For MUC-6, the module obtains 62.2% recall and 78.8% precision, while for MUC-7, it obtains 50.1% recall and 75.4% precision. The poor recall and comparatively high precision reflect the capability of the state-ofthe-art learning-based NP resolution systems.",
"cite_spans": [
{
"start": 229,
"end": 249,
"text": "Vilain et al. (1995)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.3"
},
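The Vilain et al. (1995) scheme is link-based: each key chain S contributes |S| − p(S) correct links, where p(S) is the number of pieces the response partition cuts S into (mentions absent from the response count as singleton pieces); precision swaps the roles of key and response. A small sketch, assuming chains are given as sets of mention ids (helper names are ours, not from any released scorer):

```python
def _pieces(chain, partition):
    """Number of pieces `chain` is cut into by `partition`;
    mentions absent from the partition form singleton pieces."""
    ids = set()
    for m in chain:
        holder = next((i for i, c in enumerate(partition) if m in c), None)
        ids.add(holder if holder is not None else ("singleton", m))
    return len(ids)

def muc_score(key, response):
    """MUC recall and precision for two lists of chains (sets of mentions)."""
    def links(a, b):
        num = sum(len(s) - _pieces(s, b) for s in a)
        den = sum(len(s) - 1 for s in a)
        return num / den if den else 0.0
    return links(key, response), links(response, key)
```

For example, with key chain {A, B, C, D} and response chains {A, B} and {C, D}, recall is (4 − 2)/3 = 2/3, while precision is 1.0, since every response link is also a key link.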
{
"text": "The third block of Table 4 summarizes the performance of the classifier DT pron\u2212opt in real resolution. In the systems RealResolve-1 and RealResolve-2, the antecedents of pronominal candidates are found by DT pron\u2212opt and DT pron respectively, while in both systems the antecedents of non-pronominal candidates are by DT non\u2212pron . As shown in the table, compared with the Optimal where the backward features of testing instances are optimally assigned, the recall rates of two systems drop largely by 7.8% for MUC-6 and by about 14% for MUC-7. The scores of recall are even lower than those of Baseline. As a result, in comparison with Optimal, we see the degrade of the F-measure and the success rate, which confirms our hypothesis that the classifier learned on perfect training instances would probably not perform well on the noisy testing instances.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.3"
},
{
"text": "The system RealResolve-3 listed in the fifth line of the table uses the classifier trained and tested on instances whose backward features are assigned according to the results from DT non\u2212pron and DT pron . From the table we can find that: (1) Compared with Baseline, the system produces gains in recall (2.1% for MUC-6 and 2.8% for MUC-7) with no significant loss in precision. Overall, we observe the increase in F-measure for both data sets. If measured by Success, the improvement is more apparent by 4.7% (MUC-6) and 1.8% (MUC-7). (2) Compared with RealResolve-1(2), the performance decrease of RealResolve-3 against Optimal is not so large. Especially for MUC-6, the system obtains a success rate as high as Optimal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.3"
},
{
"text": "The above results show that our model can be successfully applied in the real pronoun resolution task, even given the low recall of the current non-pronoun resolution module. This should be owed to the fact that for a candidate, its adjacent antecedents, even not the closest one, could give clues to reflect its salience in the local discourse. That is, the model prefers a high precision to a high recall, which copes well with the capability of the existing non-pronoun resolution module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.3"
},
{
"text": "In our experiments we also tested the classifier refining algorithm described in Figure 3 . We found that for both MUC-6 and MUC-7 data set, the algorithm terminated in the second round. The comparison of DT 2 pron and DT 1 pron (i.e. DT pron ) showed that these two trees were exactly the same. The algorithm converges fast probably because in the data set, most of the antecedent candidates are non-pronouns (89.1% for MUC-6 and 83.7% for MUC-7). Consequently, the ratio of the training instances with backward features changed may be not substantial enough to affect the classifier generation. Although the algorithm provided no further refinement for DT pron , we can use DT pron , as suggested in Section 5.2, to calculate backward features and classify instances by running PRON-RESOLVE(DT non\u2212pron , DT pron ). The results of such a system, RealResolve-4, are listed in the last line of Table 4 . For both MUC-6 and MUC-7, RealResolve-4 obtains exactly the same performance as RealResolve-3.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 89,
"text": "Figure 3",
"ref_id": null
},
{
"start": 894,
"end": 901,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5.3"
},
{
"text": "To our knowledge, our work is the first effort that systematically explores the influence of coreferential information of candidates on pronoun resolution in learning-based ways. Iida et al. (2003) also take into consideration the contextual clues in their coreference resolution system, by using two features to reflect the ranking order of a candidate in Salience Reference List (SRL). However, similar to common centering models, in their system the ranking of entities in SRL is also heuristic-based.",
"cite_spans": [
{
"start": 179,
"end": 197,
"text": "Iida et al. (2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "The coreferential chain length of a candidate, or its variants such as occurrence frequency and TFIDF, has been used as a salience factor in some learning-based reference resolution systems (Iida et al., 2003; Mitkov, 1998; Paul et al., 1999; Strube and Muller, 2003) . However, for an entity, the coreferential length only reflects its global salience in the whole text(s), instead of the local salience in a discourse segment which is nevertheless more informative for pronoun resolution. Moreover, during resolution, the found coreferential length of an entity is often incomplete, and thus the obtained length value is usually inaccurate for the salience evaluation.",
"cite_spans": [
{
"start": 190,
"end": 209,
"text": "(Iida et al., 2003;",
"ref_id": "BIBREF4"
},
{
"start": 210,
"end": 223,
"text": "Mitkov, 1998;",
"ref_id": "BIBREF5"
},
{
"start": 224,
"end": 242,
"text": "Paul et al., 1999;",
"ref_id": "BIBREF10"
},
{
"start": 243,
"end": 267,
"text": "Strube and Muller, 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper we have proposed a model which incorporates coreferential information of candi-dates to improve pronoun resolution. When evaluating a candidate, the model considers its adjacent antecedent by describing its properties in terms of backward features. We first examined the effectiveness of the model by applying it in an optimal environment where the closest antecedent of a candidate is obtained correctly. The experiments show that it boosts the success rate of the baseline system for both MUC-6 (4.7%) and MUC-7 (3.5%). Then we proposed how to apply our model in the real resolution where the antecedent of a non-pronoun is found by an additional non-pronoun resolution module. Our model can still produce Success improvement (4.7% for MUC-6 and 1.8% for MUC-7) against the baseline system, despite the low recall of the non-pronoun resolution module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "In the current work we restrict our study only to pronoun resolution. In fact, the coreferential information of candidates is expected to be also helpful for non-pronoun resolution. We would like to investigate the influence of the coreferential factors on general NP reference resolution in our future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "In our study candidates are filtered by checking the gender, number and animacy agreements in advance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The first and second person pronouns are discarded in our study.3 The testing instances are collected in the same ways as the training instances.4 In the experiments, an anaphor is considered correctly resolved only if the found antecedent is in the same coreferential chain of the anaphor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A centering approach to pronouns",
"authors": [
{
"first": "S",
"middle": [],
"last": "Brennan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pollard",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of the 25th Annual Meeting of the Association for Compuational Linguistics",
"volume": "",
"issue": "",
"pages": "155--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Brennan, M. Friedman, and C. Pollard. 1987. A centering approach to pronouns. In Proceedings of the 25th Annual Meeting of the Association for Compuational Linguis- tics, pages 155-162.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A statistical approach to anaphora resolution",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 6th Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Ge, J. Hale, and E. Charniak. 1998. A statistical approach to anaphora resolution. In Proceedings of the 6th Workshop on Very Large Corpora.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Providing a unified account of definite noun phrases in discourse",
"authors": [
{
"first": "B",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the 21st Annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "44--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Grosz, A. Joshi, and S. Weinstein. 1983. Providing a unified account of definite noun phrases in discourse. In Proceedings of the 21st Annual meeting of the Association for Computational Linguistics, pages 44-50.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Centering: a framework for modeling the local coherence of discourse",
"authors": [
{
"first": "B",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "2",
"pages": "203--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Grosz, A. Joshi, and S. Weinstein. 1995. Centering: a framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incorporating contextual cues in trainable models for coreference resolution",
"authors": [
{
"first": "R",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 10th Conference of EACL, Workshop \"The Computational Treatment of Anaphora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Iida, K. Inui, H. Takamura, and Y. Mat- sumoto. 2003. Incorporating contextual cues in trainable models for coreference resolu- tion. In Proceedings of the 10th Confer- ence of EACL, Workshop \"The Computa- tional Treatment of Anaphora\".",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Robust pronoun resolution with limited knowledge",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th Int. Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "869--875",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proceedings of the 17th Int. Conference on Computational Lin- guistics, pages 869-875.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Anaphora resolution: The state of the art",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mitkov. 1999. Anaphora resolution: The state of the art. Technical report, University of Wolverhampton.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Proceedings of the Sixth Message Understanding Conference",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference. Morgan Kauf- mann Publishers, San Francisco, CA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Proceedings of the Seventh Message Understanding Conference",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MUC-7. 1998. Proceedings of the Seventh Message Understanding Conference. Morgan Kaufmann Publishers, San Francisco, CA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguis- tics, pages 104-111, Philadelphia.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Corpus-based anaphora resolution towards antecedent preference",
"authors": [
{
"first": "K",
"middle": [],
"last": "Paul",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, Workshop \"Coreference and It's Applications",
"volume": "",
"issue": "",
"pages": "47--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul, K. Yamamoto, and E. Sumita. 1999. Corpus-based anaphora resolution towards antecedent preference. In Proceedings of the 37th Annual Meeting of the Associa- tion for Computational Linguistics, Work- shop \"Coreference and It's Applications\", pages 47-52.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "C4.5: Programs for machine learning",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Quinlan. 1993. C4.5: Programs for ma- chine learning. Morgan Kaufmann Publish- ers, San Francisco, CA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Focusing for interpretation of pronouns",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sidner",
"suffix": ""
}
],
"year": 1981,
"venue": "American Journal of Computational Linguistics",
"volume": "7",
"issue": "4",
"pages": "217--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Sidner. 1981. Focusing for interpretation of pronouns. American Journal of Computa- tional Linguistics, 7(4):217-231.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "W",
"middle": [],
"last": "Soon",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Soon, H. Ng, and D. Lim. 2001. A ma- chine learning approach to coreference reso- lution of noun phrases. Computational Lin- guistics, 27(4):521-544.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A machine learning approach to pronoun resolution in spoken dialogue",
"authors": [
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Muller",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "168--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Strube and C. Muller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics, pages 168-175, Japan.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Never look back: An alternative to centering",
"authors": [
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th Int. Conference on Computational Linguistics and 36th Annual Meeting of ACL",
"volume": "",
"issue": "",
"pages": "1251--1257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Strube. 1998. Never look back: An alterna- tive to centering. In Proceedings of the 17th Int. Conference on Computational Linguis- tics and 36th Annual Meeting of ACL, pages 1251-1257.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A corpus-based evaluation of centering and pronoun resolution",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "507--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Tetreault. 2001. A corpus-based eval- uation of centering and pronoun resolution. Computational Linguistics, 27(4):507-520.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A model-theoretic coreference scoring scheme",
"authors": [
{
"first": "M",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Sixth Message understanding Conference (MUC-6)",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message understanding Conference (MUC-6), pages 45-52, San Francisco, CA. Morgan Kaufmann Publishers.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Coreference resolution using competition learning approach",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yang, G. Zhou, J. Su, and C. Tan. 2003. Coreference resolution using competi- tion learning approach. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Japan.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Recall = #positive instances classif ied correctly #positive instances Precision = #positive instances classif ied correctly #instances classif ied as positive",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Top portion of the decision tree learned on MUC-6 with the backward features",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>23. Apposition</td><td>1 if ante-of-candi and candi are in an appositive structure</td></tr><tr><td colspan=\"2\">Features describing the candidate (candi ):</td></tr><tr><td>24. candi NoAntecedent</td><td>1 if candi has no antecedent available; else 0</td></tr></table>",
"type_str": "table",
"text": "Features describing the antecedent of the candidate (ante-of-candi): 15. ante-candi DefNp 1 if ante-of-candi is a definite NP; else 0 16. ante-candi IndefNp 1 if ante-of-candi is an indefinite NP; else 0 17. ante-candi Pron 1 if ante-of-candi is a pronoun; else 0 18. ante-candi Proper 1 if ante-of-candi is a proper name; else 0 19. ante-candi NE Type 1 if ante-of-candi is an \"organization\" named-entity; 2 if \"person\", 3 if other types, 0 if not a NE 20. ante-candi Human the likelihood (0-100) that ante-of-candi is a human entity 21. ante-candi FirstNPInSent 1 if ante-of-candi is the first NP in the sentence where it occurs 22. ante-candi SubjNP 1 if ante-of-candi is the subject of the sentence where it occurs Features describing the relationships between the candidate (candi ) and ante-of-candi:",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Backward features used to capture the coreferential information of a candidate antecedent for each candidate and accordingly generate the training and testing instances.",
"html": null
}
}
}
}