{
"paper_id": "C10-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:55:56.081172Z"
},
"title": "End-to-End Coreference Resolution via Hypergraph Partitioning",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Cai",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Group Heidelberg Institute for Theoretical Studies gGmbH",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Group Heidelberg Institute for Theoretical Studies gGmbH",
"institution": "",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a novel approach to coreference resolution which implements a global decision via hypergraph partitioning. In contrast to almost all previous approaches, we do not rely on separate classification and clustering steps, but perform coreference resolution globally in one step. Our hypergraph-based global model implemented within an end-to-end coreference resolution system outperforms two strong baselines (Soon et al., 2001; Bengtson & Roth, 2008) using system mentions only.",
"pdf_parse": {
"paper_id": "C10-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a novel approach to coreference resolution which implements a global decision via hypergraph partitioning. In contrast to almost all previous approaches, we do not rely on separate classification and clustering steps, but perform coreference resolution globally in one step. Our hypergraph-based global model implemented within an end-to-end coreference resolution system outperforms two strong baselines (Soon et al., 2001; Bengtson & Roth, 2008) using system mentions only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution is the task of grouping mentions of entities into sets so that all mentions in one set refer to the same entity. Most recent approaches to coreference resolution divide this task into two steps: (1) a classification step which determines whether a pair of mentions is coreferent or which outputs a confidence value, and (2) a clustering step which groups mentions into entities based on the output of step 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The classification steps of most approaches vary in the choice of the classifier (e.g. decision tree classifiers (Soon et al., 2001) , maximum entropy classification (Luo et al., 2004) , SVM classifiers (Rahman & Ng, 2009) ) and the number of features used (Soon et al. (2001) employ a set of twelve simple but effective features while e.g., Ng & Cardie (2002) and Bengtson & Roth (2008) devise much richer feature sets).",
"cite_spans": [
{
"start": 113,
"end": 132,
"text": "(Soon et al., 2001)",
"ref_id": "BIBREF21"
},
{
"start": 166,
"end": 184,
"text": "(Luo et al., 2004)",
"ref_id": "BIBREF13"
},
{
"start": 203,
"end": 222,
"text": "(Rahman & Ng, 2009)",
"ref_id": "BIBREF19"
},
{
"start": 257,
"end": 276,
"text": "(Soon et al. (2001)",
"ref_id": "BIBREF21"
},
{
"start": 342,
"end": 360,
"text": "Ng & Cardie (2002)",
"ref_id": "BIBREF17"
},
{
"start": 365,
"end": 387,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The clustering step exhibits much more variation: Local variants utilize a closest-first decision (Soon et al., 2001) , where a mention is resolved to its closest possible antecedent, or a best-first decision (Ng & Cardie, 2002) , where a mention is resolved to its most confident antecedent (based on the confidence value returned by step 1). Global variants attempt to consider all possible clustering possibilities by creating and searching a Bell tree (Luo et al., 2004) , by learning the optimal search strategy itself (Daum\u00e9 III & Marcu, 2005) , by building a graph representation and applying graph clustering techniques (Nicolae & Nicolae, 2006) , or by employing integer linear programming (Klenner, 2007; Denis & Baldridge, 2009) . Since these methods base their global clustering step on a local pairwise model, some global information which could have guided step 2 is already lost. The twin-candidate model (Yang et al., 2008) replaces the pairwise model by learning preferences between two antecedent candidates in step 1 and applies tournament schemes instead of the clustering in step 2.",
"cite_spans": [
{
"start": 98,
"end": 117,
"text": "(Soon et al., 2001)",
"ref_id": "BIBREF21"
},
{
"start": 209,
"end": 228,
"text": "(Ng & Cardie, 2002)",
"ref_id": "BIBREF17"
},
{
"start": 455,
"end": 473,
"text": "(Luo et al., 2004)",
"ref_id": "BIBREF13"
},
{
"start": 523,
"end": 548,
"text": "(Daum\u00e9 III & Marcu, 2005)",
"ref_id": "BIBREF9"
},
{
"start": 627,
"end": 652,
"text": "(Nicolae & Nicolae, 2006)",
"ref_id": "BIBREF18"
},
{
"start": 698,
"end": 713,
"text": "(Klenner, 2007;",
"ref_id": "BIBREF11"
},
{
"start": 714,
"end": 738,
"text": "Denis & Baldridge, 2009)",
"ref_id": "BIBREF10"
},
{
"start": 919,
"end": 938,
"text": "(Yang et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is little work which deviates from this two-step scheme. Culotta et al. (2007) introduce a first-order probabilistic model which implements features over sets of mentions and thus operates directly on entities.",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we describe a novel approach to coreference resolution which avoids the division into two steps and instead performs a global decision in one step. We represent a document as a hypergraph, where the vertices denote mentions and the edges denote relational features between mentions. Coreference resolution is performed globally in one step by partitioning the hypergraph into subhypergraphs so that all mentions in one subhypergraph refer to the same entity. Our model outperforms two strong baselines, Soon et al. (2001) and Bengtson & Roth (2008) . Soon et al. (2001) developed an end-to-end coreference resolution system for the MUC data, i.e., a system which processes raw documents as input and produces annotated ones as output. However, with the advent of the ACE data, many systems either evaluated only true mentions, i.e. mentions which are included in the annotation, the so-called key, or even received true information for mention boundaries, heads of mentions and mention type (Culotta et al., 2007, inter alia) . While these papers report impressive results it has been concluded that this experimental setup simplifies the task and leads to an unrealistic surrogate for the coreference resolution problem (Stoyanov et al., 2009, p.657, p660) . We argue that the field should move towards a realistic setting using system mentions, i.e. automatically determined mention boundaries and types. In this paper we report results using our end-to-end coreference resolution system, COPA, without relying on unrealistic assumptions. Soon et al. (2001) transform the coreference resolution problem straightforwardly into a pairwise classification task making it accessible to standard machine learning classifiers. They use a set of twelve powerful features. Their system is based solely on information of the mention pair anaphor and antecedent. It does not take any information of other mentions into account.
However, it turned out that it is difficult to improve upon their results just by applying a more sophisticated learning method and without improving the features. We use a reimplementation of their system as first baseline. Bengtson & Roth (2008) push this approach to the limit by devising a much more informative feature set. They report the best results to date on the ACE 2004 data using true mentions. We use their system combined with our preprocessing components as second baseline. Luo et al. (2004) perform the clustering step within a Bell tree representation. Hence their system theoretically has access to all possible outcomes making it a potentially global system. However, the classification step is still based on a pairwise model. Also, since the search space in the Bell tree is too large, they have to apply search heuristics. Hence, their approach loses much of the power of a truly global approach. Culotta et al. (2007) introduce a first-order probabilistic model which implements features over sets of mentions. They use four features for their first-order model. The first is an enumeration over pairs of noun phrases. The second is the output of a pairwise model. The third is the cluster size. The fourth counts mention type, number and gender in each cluster. Still, their model is based mostly on information about pairs of mentions. They assume true mentions as input. It is not clear whether the improvement in results translates to system mentions. Nicolae & Nicolae (2006) describe a graph-based approach which superficially resembles our approach. However, they still implement a two-step coreference resolution approach and apply the global graph-based model only to step 2. They report considerable improvements over state-of-the-art systems including Luo et al. (2004) . However, since they not only change the clustering strategy but also the features for step 1, it is not clear whether the improvements are due to the graph-based clustering technique.
We, instead, describe a graph-based approach which performs classification and clustering in one step. We compare our approach with two competitive systems using the same feature sets.",
"cite_spans": [
{
"start": 518,
"end": 536,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
},
{
"start": 541,
"end": 563,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 566,
"end": 584,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
},
{
"start": 1006,
"end": 1040,
"text": "(Culotta et al., 2007, inter alia)",
"ref_id": null
},
{
"start": 1236,
"end": 1272,
"text": "(Stoyanov et al., 2009, p.657, p660)",
"ref_id": null
},
{
"start": 1556,
"end": 1574,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
},
{
"start": 2159,
"end": 2181,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 2425,
"end": 2442,
"text": "Luo et al. (2004)",
"ref_id": "BIBREF13"
},
{
"start": 2853,
"end": 2874,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF8"
},
{
"start": 3413,
"end": 3437,
"text": "Nicolae & Nicolae (2006)",
"ref_id": "BIBREF18"
},
{
"start": 3718,
"end": 3735,
"text": "Luo et al. (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The COPA system consists of learning modules which learn hyperedge weights from the training data, and resolution modules which create a hypergraph representation for the testing data and perform partitioning to produce subhypergraphs, each of which represents an entity. An example analysis of a short document involving the two entities, BARACK OBAMA and NICOLAS SARKOZY, illustrates how COPA works. On this initial representation, a spectral clustering technique is applied to find two partitions which have the strongest within-cluster connections and the weakest between-cluster relations. The cut found is called Normalized Cut, which avoids trivial partitions frequently output by the min-cut algorithm. The two output subhypergraphs (Figure (1b)) correspond to two resolved entities shown on both sides of the bold dashed line. In real cases, recursive cutting is applied to all the subhypergraphs resulting from previous steps, until a stopping criterion is reached. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COPA: Coreference Partitioner",
"sec_num": "3"
},
{
"text": "COPA needs training data only for computing the hyperedge weights. Hyperedges represent features. Each hyperedge corresponds to a feature instance modeling a simple relation between two or more mentions. This leads to initially overlapping sets of mentions. Hyperedges are assigned weights which are calculated based on the training data as the percentage of the initial edges (as illustrated in Figure (1a)) being in fact coreferent. The weights for some of Soon et al. (2001) 's features learned from the ACE 2004 training data are given in Table 1 ",
"cite_spans": [
{
"start": 459,
"end": 477,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 543,
"end": 550,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "HyperEdgeLearner",
"sec_num": "3.1"
},
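The weight estimation described above (a hyperedge type's weight is the fraction of mention pairs inside its initial training instances that are in fact coreferent) can be sketched as follows. This is a minimal illustration with hypothetical data structures, not COPA's actual code:

```python
from itertools import combinations

def hyperedge_weight(edge_instances, entity_of):
    """Weight of one hyperedge type: the fraction of mention pairs inside
    its initial edge instances that are in fact coreferent, i.e. that
    share a gold entity in the training data."""
    coref, total = 0, 0
    for mentions in edge_instances:          # each instance is a set of mention ids
        for m1, m2 in combinations(sorted(mentions), 2):
            total += 1
            if entity_of[m1] == entity_of[m2]:
                coref += 1
    return coref / total if total else 0.0

# toy training data: two instances of a StrMatch-like edge type
instances = [{"m1", "m2", "m3"}, {"m4", "m5"}]
gold = {"m1": "E1", "m2": "E1", "m3": "E2", "m4": "E3", "m5": "E3"}
print(hyperedge_weight(instances, gold))     # 2 coreferent pairs out of 4
```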
{
"text": "Unlike pairwise models, COPA processes a document globally in one step, taking care of the preference information among all the mentions at the same time and clustering them into sets directly. A raw document is represented as a single hypergraph with multiple edges. The hypergraph resolver partitions the simple hypergraph into several subhypergraphs, each corresponding to one set of coreferent mentions (see e.g. Figure ( 1b) which contains two subhypergraphs).",
"cite_spans": [],
"ref_spans": [
{
"start": 417,
"end": 425,
"text": "Figure (",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coreference Resolution Modules",
"sec_num": "3.2"
},
{
"text": "A single document is represented in a hypergraph with basic relational features. Each hyperedge in a graph corresponds to an instance of one of those features with the weight assigned by the HyperEdgeLearner. Instead of connecting nodes with the target relation as usually done in graph models, COPA builds the graph directly out of a set of low dimensional features without any assumptions for a distance metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HGModelBuilder",
"sec_num": "3.2.1"
},
{
"text": "In order to partition the hypergraph we adopt a spectral clustering algorithm. Spectral clustering techniques use information obtained from the eigenvalues and eigenvectors of the graph Laplacian to cluster the vertices. They are simple to implement, reasonably fast, and have been shown to frequently outperform traditional clustering algorithms such as k-means. These techniques have many applications, e.g. image segmentation (Shi & Malik, 2000) .",
"cite_spans": [
{
"start": 486,
"end": 505,
"text": "(Shi & Malik, 2000)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HGResolver",
"sec_num": "3.2.2"
},
{
"text": "Algorithm 1 (R2 partitioner). Graph Laplacian: L = I \u2212 D_v^{\u22121/2} H W D_e^{\u22121} H^T D_v^{\u22121/2}. Normalized Cut: Ncut(S) := vol(\u2202S) (1/vol(S) + 1/vol(S^c)). Input: target hypergraph HG, predefined stopping criterion \u03b1\u22c6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HGResolver",
"sec_num": "3.2.2"
},
{
"text": "We adopt two variants of spectral clustering, recursive 2-way partitioning (R2 partitioner) and flat-K partitioning. Since flat-K partitioning did not perform as well we focus here on recursive 2-way partitioning. In contrast to flat-K partitioning, this method does not need any information about the number of target sets. Instead a stopping criterion \u03b1 \u22c6 has to be provided. \u03b1 \u22c6 is adjusted on development data (see Algorithm 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HGResolver",
"sec_num": "3.2.2"
},
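The recursive control flow of such a 2-way partitioner can be sketched as follows. Here `bipartition` is a hypothetical stand-in for one spectral 2-way cut (returning the two subhypergraphs and the cut's Ncut value), and `alpha_star` plays the role of the stopping criterion described above; this is an illustrative skeleton, not COPA's implementation:

```python
def r2_partition(hg, alpha_star, bipartition):
    """Recursive 2-way partitioning (sketch): keep cutting each
    subhypergraph until the best Normalized Cut found exceeds the
    stopping criterion alpha*."""
    sub1, sub2, ncut = bipartition(hg)
    if ncut > alpha_star:
        return [hg]                    # cut too expensive: hg stays as one entity
    return (r2_partition(sub1, alpha_star, bipartition)
            + r2_partition(sub2, alpha_star, bipartition))

# toy stand-in for a spectral cut: split in half; pretend small sets cut badly
def toy_bipartition(hg):
    mid = len(hg) // 2
    return hg[:mid], hg[mid:], (0.9 if len(hg) <= 2 else 0.1)

print(r2_partition(["m1", "m2", "m3", "m4"], 0.5, toy_bipartition))
# -> [['m1', 'm2'], ['m3', 'm4']]
```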
{
"text": "In order to apply spectral clustering to hypergraphs we follow Agarwal et al. (2005) . All experimental results are obtained using symmetric Laplacians (L sym ) (von Luxburg, 2007) .",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "Agarwal et al. (2005)",
"ref_id": "BIBREF0"
},
{
"start": 161,
"end": 180,
"text": "(von Luxburg, 2007)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HGResolver",
"sec_num": "3.2.2"
},
{
"text": "Given a hypergraph HG, a set of matrices is generated. D_v and D_e denote the diagonal matrices containing the vertex and hyperedge degrees, respectively. The |V| \u00d7 |E| matrix H represents the HG with the entries h(v, e) = 1 if v \u2208 e and 0 otherwise. H^T is the transpose of H. W is the diagonal matrix with the edge weights. S is one of the subhypergraphs generated from a cut in the HG, where Ncut(S) is the cut's value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HGResolver",
"sec_num": "3.2.2"
},
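Under these definitions, the symmetric hypergraph Laplacian can be assembled in a few lines of NumPy. The following is a sketch with a toy incidence matrix (the data and variable names are invented for illustration, not taken from the system):

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Symmetric hypergraph Laplacian:
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    where H is the |V| x |E| incidence matrix and w the hyperedge weights."""
    W = np.diag(w)
    dv = H @ w                       # weighted vertex degrees
    de = H.sum(axis=0)               # hyperedge degrees
    Dv_isqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    return np.eye(H.shape[0]) - Dv_isqrt @ H @ W @ De_inv @ H.T @ Dv_isqrt

# toy hypergraph: 4 mentions, 2 hyperedges ({0,1,2} and {2,3})
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], float)
w = np.array([0.8, 0.5])
L = hypergraph_laplacian(H, w)
vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
fiedler = vecs[:, 1]                 # second-smallest eigenvector
print(np.sign(fiedler))              # sign pattern suggests a 2-way split
```

As a sanity check, L is symmetric and its smallest eigenvalue is 0 (with eigenvector Dv^{1/2}·1), which is what makes the second-smallest eigenvector usable for the 2-way cut.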
{
"text": "Using Normalized Cut does not generate singleton clusters, hence a heuristic singleton detection strategy is used in COPA. We apply a threshold \u03b2 to each node in the graph. If a node's degree is below the threshold, the node will be removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HGResolver",
"sec_num": "3.2.2"
},
{
"text": "Since edge weights are assigned using simple descriptive statistics, the time HGResolver needs for building the graph Laplacian matrix is insubstantial. For eigensolving, we use an open source library provided by the Colt project, which implements a Householder-QL algorithm to solve the eigenvalue decomposition. When applied to the symmetric graph Laplacian, the complexity of the eigensolving is given by O(n^3), where n is the number of mentions in a hypergraph. Since there are only a few hundred mentions per document in our data, this complexity is not an issue (spectral clustering gets problematic when applied to millions of data points).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity of HGResolver",
"sec_num": "3.3"
},
{
"text": "The HGModelBuilder allows hyperedges with a degree higher than two to grow throughout the building process. This type of edge is mergeable. Edges with a degree of two describe pairwise relations. Thus these edges are non-mergeable. This way any kind of relational features can be incorporated into the hypergraph model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "Features are represented as types of hyperedges (in Figure (1b) the two hyperedges marked by \"-\u2022\u2022\" are of the same type). Any realized edge is an instance of the corresponding edge type. All instances derived from the same type have the same weight, but they may get reweighted by the distance feature (Section 4.4).",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 63,
"text": "Figure (1b)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "In the following Subsections we describe the features used in our experiments. We use the entire set for obtaining the final results. We restrict ourselves to Soon et al. (2001) 's features when we compare our system with theirs in order to assess the impact of our model regardless of features (we use features 1., 2., 3., 6., 7., 11., 13.).",
"cite_spans": [
{
"start": 159,
"end": 177,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "High degree edges are the particular property of the hypergraph which allows us to include all types of relational features into our model. The edges are built through pairwise relations and, if consistent, get incrementally merged into larger edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperedges With a Degree > 2",
"sec_num": "4.1"
},
{
"text": "High degree edges are not sensitive to positional information from the documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperedges With a Degree > 2",
"sec_num": "4.1"
},
{
"text": "(1) StrMatch Npron & (2) StrMatch Pron: After discarding stop words, if the strings of mentions completely match and are not pronouns, they are put into edges of the StrMatch Npron type. When the matched mentions are pronouns, they are put into the StrMatch Pron type edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperedges With a Degree > 2",
"sec_num": "4.1"
},
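A simplified sketch of how such StrMatch-type hyperedges could be built by grouping mentions on their stop-word-filtered strings follows; the stop-word list, function names and data are invented for illustration and are not the system's actual feature extractor:

```python
STOP = {"the", "a", "an", "of"}          # toy stop-word list

def str_match_edges(mentions, pronouns):
    """Group mentions whose stop-word-filtered, lowercased strings match
    exactly into StrMatch_Npron / StrMatch_Pron hyperedges (sketch)."""
    buckets = {}
    for m in mentions:
        key = " ".join(t for t in m.lower().split() if t not in STOP)
        buckets.setdefault(key, []).append(m)
    edges = {"StrMatch_Npron": [], "StrMatch_Pron": []}
    for key, ms in buckets.items():
        if len(ms) < 2:
            continue                      # a hyperedge needs at least two mentions
        kind = "StrMatch_Pron" if key in pronouns else "StrMatch_Npron"
        edges[kind].append(ms)
    return edges

ms = ["the president", "President", "he", "He", "the French president"]
print(str_match_edges(ms, pronouns={"he", "she", "it", "they"}))
```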
{
"text": "(3) Alias: After discarding stop words, if mentions are aliases of each other (i.e. proper names with partial match, full names and acronyms of organizations, etc.), they are put into edges of the Alias type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperedges With a Degree > 2",
"sec_num": "4.1"
},
{
"text": "(4) Synonym: If, according to WordNet, mentions are synonymous, they are put into an edge of the Synonym type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperedges With a Degree > 2",
"sec_num": "4.1"
},
{
"text": "(5) AllSpeak: Mentions which appear within a window of two words of a verb meaning to say form an edge of the AllSpeak type. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperedges With a Degree > 2",
"sec_num": "4.1"
},
{
"text": "Features which have been used by pairwise models are easily integrated into the hypergraph model by generating edges with only two vertices. Information sensitive to relative distance is represented by pairwise edges. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperedges With a Degree = 2",
"sec_num": "4.2"
},
{
"text": "In our model (11) mention type can only reasonably be used when it is conjoined with other features, since mention type itself describes an attribute of single mentions. In COPA, it is conjoined with other features to form hyperedges, e.g. the StrMatch Pron edge. We use the same strategy to represent (12) entity type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MentionType and EntityType",
"sec_num": "4.3"
},
{
"text": "Our hypergraph model does not have any obvious means to encode distance information. However, the distance between two mentions plays an important role in coreference resolution, especially for resolving pronouns. We do not encode distance as a feature, because this would introduce many two-degree hyperedges which would be computationally very expensive without much gain in performance. Instead, we use distance to reweight two-degree hyperedges, which are sensitive to positional information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Weights",
"sec_num": "4.4"
},
{
"text": "We experimented with two types of distance weights: One is (13) sentence distance as used in Soon et al. (2001) 's feature set, while the other is (14) compatible mentions distance as introduced by Bengtson & Roth (2008) .",
"cite_spans": [
{
"start": 93,
"end": 111,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
},
{
"start": 198,
"end": 220,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Weights",
"sec_num": "4.4"
},
{
"text": "We compare COPA's performance with two implementations of pairwise models. The first baseline is the BART (Versley et al., 2008) reimplementation of Soon et al. (2001) , with few but effective features. Our second baseline is Bengtson & Roth (2008) , which exploits a much larger feature set while keeping the machine learning approach simple. Bengtson & Roth (2008) show that their system outperforms much more sophisticated machine learning approaches such as Culotta et al. (2007) , who reported the best results on true mentions before Bengtson & Roth (2008) . Hence, Bengtson & Roth (2008) seems to be a reasonable competitor for evaluating COPA.",
"cite_spans": [
{
"start": 106,
"end": 128,
"text": "(Versley et al., 2008)",
"ref_id": "BIBREF23"
},
{
"start": 149,
"end": 167,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
},
{
"start": 226,
"end": 248,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 344,
"end": 366,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 462,
"end": 483,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF8"
},
{
"start": 540,
"end": 562,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 572,
"end": 594,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In order to report realistic results, we neither assume true mentions as input nor do we evaluate only on true mentions. Instead, we use an in-house mention tagger for automatically extracting mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use the MUC6 data (Chinchor & Sundheim, 2003) with standard training/testing divisions (30/30) as well as the MUC7 data (Chinchor, 2001 ) (30/20). Since we do not have access to the official ACE testing data (only available to ACE participants), we follow Bengtson & Roth (2008) for dividing the ACE 2004 English training data (Mitchell et al., 2004) into training, development and testing partitions (268/76/107). We randomly split the 252 ACE 2003 training documents (Mitchell et al., 2003) using the same proportions into training, development and testing (151/38/63). The systems were tuned on development and run only once on testing data.",
"cite_spans": [
{
"start": 21,
"end": 48,
"text": "(Chinchor & Sundheim, 2003)",
"ref_id": "BIBREF7"
},
{
"start": 123,
"end": 138,
"text": "(Chinchor, 2001",
"ref_id": "BIBREF6"
},
{
"start": 259,
"end": 281,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 330,
"end": 353,
"text": "(Mitchell et al., 2004)",
"ref_id": "BIBREF14"
},
{
"start": 472,
"end": 495,
"text": "(Mitchell et al., 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We implement a classification-based mention tagger, which tags each NP chunk as ACE mention or not, with necessary post-processing for embedded mentions. For the ACE 2004 testing data, we cover 75.8% of the heads with 73.5% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Tagger",
"sec_num": "5.2"
},
{
"text": "We evaluate COPA with three coreference resolution evaluation metrics: the B 3 -algorithm (Bagga & Baldwin, 1998) , the CEAF-algorithm (Luo, 2005) , and, for the sake of completeness, the MUC-score (Vilain et al., 1995) .",
"cite_spans": [
{
"start": 90,
"end": 113,
"text": "(Bagga & Baldwin, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 135,
"end": 146,
"text": "(Luo, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 198,
"end": 219,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.3"
},
{
"text": "Since the MUC-score does not evaluate singleton entities, it only partially evaluates the performance for ACE data, which includes singleton entities in the keys. The B 3 -algorithm (Bagga & Baldwin, 1998) addresses this problem of the MUC-score by conducting calculations based on mentions instead of coreference relations. However, another problematic issue emerges when system mentions have to be dealt with: B 3 assumes the mentions in the key and in the response to be identical, which is unlikely when a mention tagger is used to create system mentions. The CEAF-algorithm aligns entities in key and response by means of a similarity metric, which is motivated by B 3 's shortcoming of using one entity multiple times (Luo, 2005) . However, although CEAF theoretically does not require the same number of mentions in key and response, the algorithm still cannot be directly applied to end-to-end coreference resolution systems, because the similarity metric is influenced by the number of mentions in key and response.",
"cite_spans": [
{
"start": 182,
"end": 205,
"text": "(Bagga & Baldwin, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 724,
"end": 735,
"text": "(Luo, 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.3"
},
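For reference, the plain B3 metric that the extensions discussed below start from can be computed as follows. This is a sketch assuming identical key and response mention sets (i.e. true mentions); the helper names and toy data are hypothetical:

```python
def b_cubed(key, response):
    """Plain B3 (Bagga & Baldwin, 1998) on identical mention sets:
    per-mention precision |K_m & R_m| / |R_m| and recall |K_m & R_m| / |K_m|,
    averaged over all mentions. key/response: lists of mention sets."""
    def chain_of(clusters):
        return {m: c for c in clusters for m in c}
    k, r = chain_of(key), chain_of(response)
    mentions = list(k)
    p = sum(len(k[m] & r[m]) / len(r[m]) for m in mentions) / len(mentions)
    rec = sum(len(k[m] & r[m]) / len(k[m]) for m in mentions) / len(mentions)
    f = 2 * p * rec / (p + rec)
    return p, rec, f

# toy example: key entities {a,b},{c,d}; response entities {a,b,c},{d}
key = [frozenset("ab"), frozenset("cd")]
resp = [frozenset("abc"), frozenset("d")]
p, r, f = b_cubed(key, resp)
```

Working it through by hand: precision averages 2/3, 2/3, 1/3 and 1 to 2/3; recall averages 1, 1, 1/2 and 1/2 to 3/4.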
{
"text": "Hence, both the B 3 - and CEAF-algorithms have to be extended to deal with system mentions which are not in the key and true mentions not extracted by the system, so-called twinless mentions (Stoyanov et al., 2009) . Two variants of the B 3 -algorithm are proposed by Stoyanov et al. (2009) , B 3 all and B 3 0 . B 3 all tries to assign intuitive precision and recall to the twinless system mentions and twinless key mentions, while keeping the size of the system mention set and the key mention set unchanged (which are different from each other). For twinless mentions, B 3 all discards twinless key mentions for precision and twinless system mentions for recall. Discarding parts of the key mentions, however, makes the fair comparison of precision values difficult. B 3 0 produces counter-intuitive precision by discarding all twinless system mentions. Although it penalizes the recall of all twinless key mentions, so that the F-scores are balanced, it is still too lenient (for further analyses see Cai & Strube (2010) ).",
"cite_spans": [
{
"start": 190,
"end": 213,
"text": "(Stoyanov et al., 2009)",
"ref_id": "BIBREF22"
},
{
"start": 267,
"end": 289,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF22"
},
{
"start": 1003,
"end": 1022,
"text": "Cai & Strube (2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.3"
},
{
"text": "We devise two variants of the B 3 - and CEAF-algorithms, namely B 3 sys and CEAF sys . For computing precision, the algorithms put all twinless true mentions into the response even if they were not extracted. All twinless system mentions which were deemed not coreferent are discarded. Only twinless system mentions which were mistakenly resolved are put into the key. Hence, the system is penalized for resolving mentions not found in the key. For recall the algorithms only consider mentions from the original key by discarding all the twinless system mentions and putting twinless true mentions into the response as singletons (algorithm details, simulations and comparison of different systems and metrics are provided in Cai & Strube (2010) ). For CEAF sys , \u03c6 3 (Luo, 2005) is used. B 3 sys and CEAF sys report results for end-to-end coreference resolution systems adequately.",
"cite_spans": [
{
"start": 724,
"end": 743,
"text": "Cai & Strube (2010)",
"ref_id": "BIBREF5"
},
{
"start": 766,
"end": 777,
"text": "(Luo, 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.3"
},
{
"text": "We compare COPA's performance with two baselines: SOON - the BART (Versley et al., 2008) reimplementation of Soon et al. (2001) - and B&R - Bengtson & Roth (2008). All systems share BART's preprocessing components and our in-house ACE mention tagger.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Versley et al., 2008)",
"ref_id": "BIBREF23"
},
{
"start": 108,
"end": 126,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.4"
},
{
"text": "In Table 2 we report the performance of SOON and B&R on the ACE 2004 testing data using the BART preprocessing components and our in-house ACE mention tagger. For evaluation we use B 3 sys only, since Bengtson & Roth (2008) 's system does not allow us to easily integrate CEAF.",
"cite_spans": [
{
"start": 200,
"end": 222,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.4"
},
{
"text": "B&R considerably outperforms SOON (we cannot compute statistical significance, because we do not have access to results for single documents in B&R). The difference, however, is not as big as we expected. Bengtson & Roth (2008) reported very good results when using true mentions. For evaluating on system mentions, however, they were using a too lenient variant of B 3 (Stoyanov et al., 2009) which discards all twinless mentions. When replacing this with B 3 sys the difference between SOON and B&R shrinks.",
"cite_spans": [
{
"start": 205,
"end": 227,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 370,
"end": 393,
"text": "(Stoyanov et al., 2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.4"
},
{
"text": "In both comparisons, COPA uses the same features as the corresponding baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.5"
},
{
"text": "2 http://l2r.cs.uiuc.edu/\u02dccogcomp/ asoftware.php?skey=FLBJCOREF 5.5.1 COPA vs. SOON In Table 3 we compare the SOON-baseline with COPA using the R2 partitioner (parameters \u03b1 \u22c6 and \u03b2 optimized on development data). Even though COPA and SOON use the same features, COPA consistently outperforms SOON on all data sets using all evaluation metrics. With the exception of the MUC7, the ACE 2003 and the ACE 2004 data evaluated with CEAF sys , all of COPA's improvements are statistically significant. When evaluated using MUC and B 3 sys , COPA with the R2 partitioner boosts recall in all datasets while losing in precision. This shows that global hypergraph partitioning models the coreference resolution task more adequately than Soon et al. (2001) 's local model -even when using the very same features. Table 4 we compare the B&R system (using our preprocessing components and mention tagger), and COPA with the R2 partitioner using B&R features. COPA does not use the learned features from B&R, as this would have implied to embed a pairwise coreference resolution system in COPA. We report results for ACE 2003 and ACE 2004. The parameters are optimized on the ACE 2004 data. COPA with the R2 partitioner outperforms B&R on both datasets (we cannot compute statistical significance, because we do not have access to results for single documents in B&R). Bengtson & Roth (2008) developed their system on ACE 2004 data and never exposed it to ACE 2003 data. We suspect that the relatively poor result of B&R on ACE 2003 data is caused by overfitting to ACE B&R COPA with R2 partitioner R P F R P F B 3 sys ACE 2003 56.4 97.3 71.4 70.3 86.5 77.5 ACE 2004 75.6 Table 4 : B&R vs. COPA R2 (B&R features, system mentions)",
"cite_spans": [
{
"start": 727,
"end": 745,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF21"
},
{
"start": 1355,
"end": 1377,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 802,
"end": 809,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.5"
},
{
"text": "2004. Again, COPA gains in recall and loses in precision. This shows that COPA is a highly competetive system as it outperforms Bengtson & Roth (2008) 's system which has been claimed to have the best performance on the ACE 2004 data.",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COPA vs. B&R In",
"sec_num": "5.5.2"
},
{
"text": "On a machine with 2 AMD Opteron CPUs and 8 GB RAM, COPA finishes preprocessing, training and partitioning the ACE 2004 dataset in 15 minutes, which is slightly faster than our duplicated SOON baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Running Time",
"sec_num": "5.5.3"
},
{
"text": "Most previous attempts to solve the coreference resolution task globally have been hampered by employing a local pairwise model in the classification step (step 1) while only the clustering step realizes a global approach, e.g. Luo et al. (2004) , Nicolae & Nicolae (2006) , Klenner (2007) , Denis & Baldridge (2009) , lesser so Culotta et al. (2007) . It has been also observed that improvements in performance on true mentions do not necessarily translate into performance improvements on system mentions (Ng, 2008) . In this paper we describe a coreference resolution system, COPA, which implements a global decision in one step via hypergraph partitioning. COPA looks at the whole graph at once which enables it to outperform two strong baselines (Soon et al., 2001; Bengtson & Roth, 2008) . COPA's hypergraph-based strategy can be taken as a general preference model, where the preference for one mention depends on information on all other mentions.",
"cite_spans": [
{
"start": 228,
"end": 245,
"text": "Luo et al. (2004)",
"ref_id": "BIBREF13"
},
{
"start": 248,
"end": 272,
"text": "Nicolae & Nicolae (2006)",
"ref_id": "BIBREF18"
},
{
"start": 275,
"end": 289,
"text": "Klenner (2007)",
"ref_id": "BIBREF11"
},
{
"start": 292,
"end": 316,
"text": "Denis & Baldridge (2009)",
"ref_id": "BIBREF10"
},
{
"start": 329,
"end": 350,
"text": "Culotta et al. (2007)",
"ref_id": "BIBREF8"
},
{
"start": 507,
"end": 517,
"text": "(Ng, 2008)",
"ref_id": "BIBREF16"
},
{
"start": 751,
"end": 770,
"text": "(Soon et al., 2001;",
"ref_id": "BIBREF21"
},
{
"start": 771,
"end": 793,
"text": "Bengtson & Roth, 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Outlook",
"sec_num": "6"
},
{
"text": "We follow Stoyanov et al. (2009) and argue that evaluating the performance of coreference resolution systems on true mentions is unrealistic. Hence we integrate an ACE mention tagger into our system, tune the system towards the real task, and evaluate only using system mentions. While Ng (2008) could not show that su-perior models achieved superior results on system mentions, COPA was able to outperform Bengtson & Roth (2008) 's system which has been claimed to achieve the best performance on the ACE 2004 data (using true mentions, Bengtson & Roth (2008) did not report any comparison with other systems using system mentions).",
"cite_spans": [
{
"start": 10,
"end": 32,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF22"
},
{
"start": 286,
"end": 295,
"text": "Ng (2008)",
"ref_id": "BIBREF16"
},
{
"start": 407,
"end": 429,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
},
{
"start": 538,
"end": 560,
"text": "Bengtson & Roth (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Outlook",
"sec_num": "6"
},
{
"text": "An error analysis revealed that there were some cluster-level inconsistencies in the COPA output. Enforcing this consistency would require a global strategy to propagate constraints, so that constraints can be included in the hypergraph partitioning properly. We are currently exploring constrained clustering, a field which has been very active recently (Basu et al., 2009) . Using constrained clustering methods may allow us to integrate negative information as constraints instead of combining several weak positive features to one which is still weak (e.g. our Agreement feature). For an application of constrained clustering to the related task of database record linkage, see Bhattacharya & Getoor (2009) .",
"cite_spans": [
{
"start": 355,
"end": 374,
"text": "(Basu et al., 2009)",
"ref_id": "BIBREF2"
},
{
"start": 682,
"end": 710,
"text": "Bhattacharya & Getoor (2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Outlook",
"sec_num": "6"
},
{
"text": "Graph models cannot deal well with positional information, such as distance between mentions or the sequential ordering of mentions in a document. We implemented distance as weights on hyperedges which resulted in decent performance. However, this is limited to pairwise relations and thus does not exploit the power of the high degree relations available in COPA. We expect further improvements, once we manage to include positional information directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Outlook",
"sec_num": "6"
},
{
"text": "http://acs.lbl.gov/\u02dchoschek/colt/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS PhD. scholarship. We would like to thank Byoung-Tak Zhang for bringing hypergraphs to our attention and\u00c8va M\u00fajdricza-Maydt for implementing the mention tagger. Finally we would like to thank our colleagues in the HITS NLP group for providing us with useful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Beyond pairwise clustering",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Jonwoo",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Lihi",
"middle": [],
"last": "Zelnik-Manor",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kriegman & Serge",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Belongie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)",
"volume": "2",
"issue": "",
"pages": "838--845",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agarwal, Sameer, Jonwoo Lim, Lihi Zelnik-Manor, Pietro Perona, David Kriegman & Serge Belongie (2005). Be- yond pairwise clustering. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Vol. 2, pp. 838-845.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit & Breck",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 1st International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bagga, Amit & Breck Baldwin (1998). Algorithms for scor- ing coreference chains. In Proceedings of the 1st Inter- national Conference on Language Resources and Evalu- ation, Granada, Spain, 28-30 May 1998, pp. 563-566.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Constrained Clustering: Advances in Algorithms, Theory, and Applications",
"authors": [
{
"first": "Sugato",
"middle": [],
"last": "Basu",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basu, Sugato, Ian Davidson & Kiri L. Wagstaff (Eds.) (2009). Constrained Clustering: Advances in Algorithms, Theory, and Applications. Boca Raton, Flo.: CRC Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Understanding the value of features for coreference resolution",
"authors": [
{
"first": "Eric & Dan",
"middle": [],
"last": "Bengtson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "294--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bengtson, Eric & Dan Roth (2008). Understanding the value of features for coreference resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Waikiki, Honolulu, Hawaii, 25-27 October 2008, pp. 294-303.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Collective relational clustering",
"authors": [
{
"first": "Indrajit & Lise",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2009,
"venue": "Constrained Clustering: Advances in Algorithms, Theory, and Applications",
"volume": "",
"issue": "",
"pages": "221--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhattacharya, Indrajit & Lise Getoor (2009). Collective re- lational clustering. In S. Basu, I. Davidson & K. Wagstaff (Eds.), Constrained Clustering: Advances in Algorithms, Theory, and Applications, pp. 221-244. Boca Raton, Flo.: CRC Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluation metrics for end-to-end coreference resolution systems",
"authors": [
{
"first": "Jie & Michael",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the SIGdial 2010 Conference: The 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "24--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cai, Jie & Michael Strube (2010). Evaluation metrics for end-to-end coreference resolution systems. In Proceed- ings of the SIGdial 2010 Conference: The 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Tokyo, Japan, 24-25 September 2010. To ap- pear.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Message Understanding Conference (MUC) 7. LDC2001T02",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, Nancy (2001). Message Understanding Confer- ence (MUC) 7. LDC2001T02, Philadelphia, Penn: Lin- guistic Data Consortium.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Message Understanding Conference (MUC) 6. LDC2003T13, Philadelphia",
"authors": [
{
"first": "Nancy & Beth",
"middle": [],
"last": "Chinchor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sundheim",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor, Nancy & Beth Sundheim (2003). Message Under- standing Conference (MUC) 6. LDC2003T13, Philadel- phia, Penn: Linguistic Data Consortium.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "First-order probabilistic models for coreference resolution",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wick & Andrew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Culotta, Aron, Michael Wick & Andrew McCallum (2007). First-order probabilistic models for coreference resolu- tion. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, Rochester, N.Y., 22-27 April 2007, pp. 81-88.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A large-scale exploration of effective global features for a joint entity detection and tracking model",
"authors": [
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Hal & Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Human Language Technology Conference and the 2005 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daum\u00e9 III, Hal & Daniel Marcu (2005). A large-scale ex- ploration of effective global features for a joint entity de- tection and tracking model. In Proceedings of the Human Language Technology Conference and the 2005 Confer- ence on Empirical Methods in Natural Language Process- ing, Vancouver, B.C., Canada, 6-8 October 2005, pp. 97- 104.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Global joint models for coreference resolution and named entity classification",
"authors": [
{
"first": "Pascal & Jason",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2009,
"venue": "Procesamiento del Lenguaje Natural",
"volume": "42",
"issue": "",
"pages": "87--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis, Pascal & Jason Baldridge (2009). Global joint models for coreference resolution and named entity classification. Procesamiento del Lenguaje Natural, 42:87-96.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Enforcing consistency on coreference sets",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Klenner",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klenner, Manfred (2007). Enforcing consistency on coref- erence sets. In Proceedings of the International Confer- ence on Recent Advances in Natural Language Process- ing, Borovets, Bulgaria, 27-29 September 2007, pp. 323- 328.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Human Language Technology Conference and the 2005 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luo, Xiaoqiang (2005). On coreference resolution perfor- mance metrics. In Proceedings of the Human Language Technology Conference and the 2005 Conference on Em- pirical Methods in Natural Language Processing, Van- couver, B.C., Canada, 6-8 October 2005, pp. 25-32.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A mentionsynchronous coreference resolution algorithm based on the Bell Tree",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luo, Xiaoqiang, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla & Salim Roukos (2004). A mention- synchronous coreference resolution algorithm based on the Bell Tree. In Proceedings of the 42nd Annual Meet- ing of the Association for Computational Linguistics, Barcelona, Spain, 21-26 July 2004, pp. 136-143.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Shudong Huang & Ramez Zakhary",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell, Alexis, Stephanie Strassel, Shudong Huang & Ramez Zakhary (2004). ACE 2004 Multilingual Training Corpus. LDC2005T09, Philadelphia, Penn.: Linguistic Data Consortium.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "TIDES Extraction (ACE) 2003 Multilingual Training Data. LDC2004T09",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Doddington",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Ada",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brunstain",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell, Alexis, Stephanie Strassel, Mark Przybocki, JK Davis, George Doddington, Ralph Grishman, Adam Meyers, Ada Brunstain, Lisa Ferro & Beth Sundheim (2003). TIDES Extraction (ACE) 2003 Multilingual Training Data. LDC2004T09, Philadelphia, Penn.: Lin- guistic Data Consortium.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised models for coreference resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "640--649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ng, Vincent (2008). Unsupervised models for corefer- ence resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Waikiki, Honolulu, Hawaii, 25-27 October 2008, pp. 640- 649.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "Vincent & Claire",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ng, Vincent & Claire Cardie (2002). Improving machine learning approaches to coreference resolution. In Pro- ceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Penn., 7-12 July 2002, pp. 104-111.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BestCut: A graph algorithm for coreference resolution",
"authors": [
{
"first": "Cristina & Gabriel",
"middle": [],
"last": "Nicolae",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nicolae",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "275--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolae, Cristina & Gabriel Nicolae (2006). BestCut: A graph algorithm for coreference resolution. In Proceed- ings of the 2006 Conference on Empirical Methods in Nat- ural Language Processing, Sydney, Australia, 22-23 July 2006, pp. 275-283.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Supervised models for coreference resolution",
"authors": [
{
"first": "Altaf & Vincent",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "968--977",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahman, Altaf & Vincent Ng (2009). Supervised models for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-7 August 2009, pp. 968-977.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Normalized cuts and image segmentation",
"authors": [
{
"first": "Jianbo & Jitendra",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "22",
"issue": "8",
"pages": "888--905",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shi, Jianbo & Jitendra Malik (2000). Normalized cuts and image segmentation. IEEE Transactions on Pattern Anal- ysis and Machine Intelligence, 22(8):888-905.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "Wee",
"middle": [],
"last": "Soon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soon, Wee Meng, Hwee Tou Ng & Daniel Chung Yong Lim (2001). A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Conundrums in noun phrase coreference resolution: Making sense of the state-of-the-art",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie & Ellen Riloff",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "656--664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stoyanov, Veselin, Nathan Gilbert, Claire Cardie & Ellen Riloff (2009). Conundrums in noun phrase coreference resolution: Making sense of the state-of-the-art. In Pro- ceedings of the Joint Conference of the 47th Annual Meet- ing of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Lan- guage Processing, Singapore, 2-7 August 2009, pp. 656- 664.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "BART: A modular toolkit for coreference resolution",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Jern",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2008,
"venue": "Companion Volume to the Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, Columbus",
"volume": "",
"issue": "",
"pages": "9--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Versley, Yannick, Simone Paolo Ponzetto, Massimo Poesio, Vladimir Eidelman, Alan Jern, Jason Smith, Xiaofeng Yang & Alessandro Moschitti (2008). BART: A mod- ular toolkit for coreference resolution. In Companion Volume to the Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, Colum- bus, Ohio, 15-20 June 2008, pp. 9-12.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A model-theoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Connolly & Lynette Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 6th Message Understanding Conference (MUC-6)",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vilain, Marc, John Burger, John Aberdeen, Dennis Connolly & Lynette Hirschman (1995). A model-theoretic corefer- ence scoring scheme. In Proceedings of the 6th Message Understanding Conference (MUC-6), pp. 45-52. San Ma- teo, Cal.: Morgan Kaufmann.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A tutorial on spectral clustering",
"authors": [
{
"first": "Ulrike",
"middle": [],
"last": "Von Luxburg",
"suffix": ""
}
],
"year": 2007,
"venue": "Statistics and Computing",
"volume": "17",
"issue": "4",
"pages": "395--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "von Luxburg, Ulrike (2007). A tutorial on spectral clustering. Statistics and Computing, 17(4):395-416.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A twincandidate model for learning-based anaphora resolution",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "&",
"middle": [],
"last": "Chew Lim Tan",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "3",
"pages": "327--356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang, Xiaofeng, Jian Su & Chew Lim Tan (2008). A twin- candidate model for learning-based anaphora resolution. Computational Linguistics, 34(3):327-356.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Hypergraph-based representation"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Agreement: If mentions agree in Gender, Number and Semantic Class they are put in edges of the Agreement type. Because Gender, Number and Semantic Class are strong negative coreference indicators -in contrast to e.g. StrMatchand hence weak positive features, they are combined into the one feature Agreement."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Apposition & (8) RelativePronoun: If two mentions are in a appositive structure, they are put in an edge of type Apposition. If the latter mention is a relative pronoun, the mentions are put in an edge of type RelativePronoun.(9) HeadModMatch: If the syntactic heads of two mentions match, and if their modifiers do not contradict each other, the mentions are put in an edge of type HeadModMatch."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "SubString: If a mention is the substring of another one, they are put into an edge of type SubString."
},
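The typed hyperedge construction sketched by the feature descriptions above can be illustrated with a small helper; the predicate functions and the `build_hyperedges` name are hypothetical stand-ins, not the paper's code, and only pairwise feature types (Agreement, Apposition, RelativePronoun, HeadModMatch, SubString) are shown.

```python
# Illustrative construction of typed hyperedges from relational features;
# NOT the authors' implementation. Pairwise features yield 2-mention edges;
# set-valued features (e.g. StrMatch) would instead collect all matching
# mentions into one high-degree edge.

from collections import defaultdict

def build_hyperedges(mentions, feature_predicates):
    """feature_predicates: {edge_type_name: pairwise predicate(m1, m2)}."""
    edges = defaultdict(list)
    for name, pred in feature_predicates.items():
        for i, m1 in enumerate(mentions):
            for m2 in mentions[i + 1:]:
                if pred(m1, m2):
                    # each satisfied feature instance becomes an edge of
                    # the corresponding type
                    edges[name].append({m1, m2})
    return dict(edges)
```

With a SubString predicate, for example, "Obama" and "Barack Obama" end up sharing one SubString edge.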
"TABREF0": {
"content": "<table><tr><td>A hypergraph (Figure (1a)) is built for this</td></tr><tr><td>document based on three features. Two hyper-</td></tr><tr><td>edges denote the feature partial string match,</td></tr><tr><td>{US President Barack Obama, Barack Obama, Obama} and {US President Barack Obama, Pres-ident Sarkozy}.</td></tr><tr><td>[US President Barack Obama] came to Toronto today.</td></tr><tr><td>[Obama] discussed the financial crisis with [President</td></tr><tr><td>Sarkozy].</td></tr><tr><td>[He] talked to him [him] about the recent downturn of the</td></tr><tr><td>European markets.</td></tr><tr><td>[Barack Obama] will leave Toronto tomorrow.</td></tr></table>",
"html": null,
"text": "One hyperedge denotes the feature pronoun match, {he, him}. Two hyperedges denote the feature all speak, {Obama, he} and {President Sarkozy, him}.",
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>calculate N cuti</td><td/></tr><tr><td>end for</td><td/></tr><tr><td>Choose the splitting point with min i</td><td>(N cuti)</td></tr><tr><td>Generate two subHGs if min i (N cuti) &lt; \u03b1 * then</td><td/></tr><tr><td>for each subHG do</td><td/></tr><tr><td colspan=\"2\">Bi-partition the subHG with the R2 partitioner</td></tr><tr><td>end for</td><td/></tr><tr><td>else</td><td/></tr></table>",
"html": null,
"text": "a HG, construct its Dv, H, W and De Compute L for HG Solve the L for the second smallest eigenvector V2 for each splitting point in V2 do",
"num": null,
"type_str": "table"
},
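The partitioner outline above (construct Dv, H, W and De; solve the Laplacian for the second-smallest eigenvector; scan splitting points by normalized cut) can be sketched as one spectral bisection step in NumPy. This is an illustrative reconstruction under stated assumptions - a Zhou-style normalized hypergraph Laplacian and a hyperedge-based normalized cut - not the authors' implementation; the recursion of the R2 partitioner would apply `r2_bipartition` to each sub-hypergraph while the best cut stays below the threshold \u03b1*.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    # Normalized hypergraph Laplacian from the incidence matrix H (|V| x |E|),
    # edge weights w, and the vertex/edge degrees playing the roles of Dv, De.
    dv = H @ w                      # vertex degrees
    de = H.sum(axis=0)              # edge degrees
    Dv_is = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    Theta = Dv_is @ H @ np.diag(w / np.maximum(de, 1e-12)) @ H.T @ Dv_is
    return np.eye(H.shape[0]) - Theta

def ncut_value(H, w, mask):
    # Hyperedge-based normalized cut: an edge is cut if it touches both sides.
    dv = H @ w
    cut = sum(w[e] for e in range(H.shape[1])
              if H[mask, e].any() and H[~mask, e].any())
    assoc_a, assoc_b = dv[mask].sum(), dv[~mask].sum()
    if assoc_a == 0 or assoc_b == 0:
        return np.inf
    return cut / assoc_a + cut / assoc_b

def r2_bipartition(H, w):
    # One bisection step: order vertices by the second-smallest eigenvector
    # and pick the splitting point with the minimum normalized cut.
    L = hypergraph_laplacian(H, w)
    _, vecs = np.linalg.eigh(L)
    order = np.argsort(vecs[:, 1])
    best = (np.inf, None)
    for i in range(1, H.shape[0]):
        mask = np.zeros(H.shape[0], dtype=bool)
        mask[order[:i]] = True
        best = min(best, (ncut_value(H, w, mask), tuple(mask)),
                   key=lambda t: t[0])
    return best  # (min normalized cut, boolean membership of one side)
```

On a toy hypergraph with two tightly connected vertex pairs joined by one weak edge, the minimum-cut split separates the two pairs, as expected.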
"TABREF4": {
"content": "<table><tr><td/><td/><td>SOON</td><td/><td/><td>B&amp;R</td></tr><tr><td/><td>R</td><td>P</td><td>F</td><td>R</td><td>P</td><td>F</td></tr><tr><td>B 3 sys</td><td colspan=\"6\">64.7 85.7 73.8 66.3 85.8 74.8</td></tr></table>",
"html": null,
"text": "SOON vs. COPA R2 (SOON features, system mentions, bold indicates significant improvement in F-score over SOON according to a paired-t test with p < 0.05)",
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>B&amp;R -Bengtson &amp; Roth</td></tr></table>",
"html": null,
"text": "Baselines on ACE 2004",
"num": null,
"type_str": "table"
}
}
}
}