{
"paper_id": "S13-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:41:37.152341Z"
},
"title": "Automatically Identifying Implicit Arguments to Improve Argument Linking and Coherence Modeling",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Implicit arguments are a discourse-level phenomenon that has not been extensively studied in semantic processing. One reason for this lies in the scarce amount of annotated data sets available. We argue that more data of this kind would be helpful to improve existing approaches to linking implicit arguments in discourse and to enable more in-depth studies of the phenomenon itself. In this paper, we present a range of studies that empirically validate this claim. Our contributions are threefold: we present a heuristic approach to automatically identify implicit arguments and their antecedents by exploiting comparable texts; we show how the induced data can be used as training data for improving existing argument linking models; finally, we present a novel approach to modeling local coherence that extends previous approaches by taking into account non-explicit entity references.",
"pdf_parse": {
"paper_id": "S13-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "Implicit arguments are a discourse-level phenomenon that has not been extensively studied in semantic processing. One reason for this lies in the scarce amount of annotated data sets available. We argue that more data of this kind would be helpful to improve existing approaches to linking implicit arguments in discourse and to enable more in-depth studies of the phenomenon itself. In this paper, we present a range of studies that empirically validate this claim. Our contributions are threefold: we present a heuristic approach to automatically identify implicit arguments and their antecedents by exploiting comparable texts; we show how the induced data can be used as training data for improving existing argument linking models; finally, we present a novel approach to modeling local coherence that extends previous approaches by taking into account non-explicit entity references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic role labeling systems traditionally process text in a sentence-by-sentence fashion, constructing local structures of semantic meaning . Information relevant to these structures, however, can be non-local in natural language texts (Palmer et al., 1986; Fillmore, 1986, inter alia) . In this paper, we view instances of this phenomenon, also referred to as implicit arguments, as elements of discourse. In a coherent discourse, each utterance focuses on a salient set of entities, also called \"foci\" (Sidner, 1979) or \"centers\" (Joshi and Kuhn, 1979) . According to the theory of Centering (Grosz et al., 1995) , the salience of an entity in a discourse is reflected by linguistic factors such as choice of referring expression and syntactic form. Both extremes of salience, i.e., contexts of referential continuity (Brown, 1983) and irrelevance, can also be reflected by the non-realization of an entity. Altough specific instances of non-realization, so-called zero anaphora, have been well-studied in discourse analysis (Sag and Hankamer, 1984; Tanenhaus and Carlson, 1990 , inter alia), this phenomenon has widely been ignored in computational approaches to entitybased coherence modeling. It could, however, provide an explanation for local coherence in cases that are not covered by current models of Centering (cf. Louis and Nenkova (2010) ). In this work, we propose a new model to predict whether realizing an argument contributes to local coherence in a given position in discourse. Example (1) shows a text fragment, in which argument realization is necessary in the first sentence but redundant in the second.",
"cite_spans": [
{
"start": 239,
"end": 260,
"text": "(Palmer et al., 1986;",
"ref_id": "BIBREF30"
},
{
"start": 261,
"end": 288,
"text": "Fillmore, 1986, inter alia)",
"ref_id": null
},
{
"start": 507,
"end": 521,
"text": "(Sidner, 1979)",
"ref_id": "BIBREF38"
},
{
"start": 546,
"end": 557,
"text": "Kuhn, 1979)",
"ref_id": "BIBREF18"
},
{
"start": 597,
"end": 617,
"text": "(Grosz et al., 1995)",
"ref_id": "BIBREF17"
},
{
"start": 823,
"end": 836,
"text": "(Brown, 1983)",
"ref_id": "BIBREF5"
},
{
"start": 1030,
"end": 1054,
"text": "(Sag and Hankamer, 1984;",
"ref_id": "BIBREF37"
},
{
"start": 1055,
"end": 1082,
"text": "Tanenhaus and Carlson, 1990",
"ref_id": "BIBREF40"
},
{
"start": 1329,
"end": 1353,
"text": "Louis and Nenkova (2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) El Salvador is now the only Latin American country which still has troops in [Iraq] . Nicaragua, Honduras and the Dominican Republic have withdrawn their troops [\u2205] .",
"cite_spans": [
{
"start": 81,
"end": 87,
"text": "[Iraq]",
"ref_id": null
},
{
"start": 165,
"end": 168,
"text": "[\u2205]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "From a semantic processing perspective, a human reader can easily infer that \"Iraq\", the marked entity in the first sentence of Example (1), is also an implicit argument of the predicate \"withdraw\" in the second sentence. This inference step is, however, difficult to model computationally as it involves an interplay of two challenging sub-tasks: first, a semantic processor has to determine that an argument is not realized (but inferrable); and second, a suit-able antecedent has to be found within the discourse context. For the remainder of this paper, we refer to these steps as identifying and linking implicit arguments to discourse antecedents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As indicated by Example (1), implicit arguments are an important aspect in semantic processing, yet they are not captured in traditional semantic role labeling systems. The main reasons for this are the scarcity of annotated data, and the inherent difficulty of inferring discourse antecedents automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose to induce implicit arguments and discourse antecedents by exploiting complementary (explicit) information obtained from monolingual comparable texts (Section 3). We apply the empirically acquired data in argument linking (Section 4) and coherence modeling (Section 5). We conclude with a discussion on the advantages of our data set and outline directions for future work (Section 6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most prominent approach to entity-based coherence modeling nowadays is the entity grid model by Barzilay and Lapata (2005) . It has originally been proposed for automatic sentence ordering but has also been applied in coherence evaluation and readability assessment (Barzilay and Lapata, 2008; Pitler and Nenkova, 2008) , and story generation (McIntyre and Lapata, 2009) . Based on the original model, a few extensions have been proposed: for example, Filippova and Strube (2007) and Elsner and Charniak (2011b) suggested additional features to characterize semantic relatedness between entities and features specific to single entities, respectively. Other entity-based approaches to coherence modeling include the pronoun model by Charniak and Elsner (2009) and the discourse-new model by Elsner and Charniak (2008) . All of these approaches are, however, based on explicitly realized entity mentions only, ignoring references that are inferrable.",
"cite_spans": [
{
"start": 100,
"end": 126,
"text": "Barzilay and Lapata (2005)",
"ref_id": "BIBREF0"
},
{
"start": 270,
"end": 297,
"text": "(Barzilay and Lapata, 2008;",
"ref_id": "BIBREF1"
},
{
"start": 298,
"end": 323,
"text": "Pitler and Nenkova, 2008)",
"ref_id": "BIBREF33"
},
{
"start": 347,
"end": 374,
"text": "(McIntyre and Lapata, 2009)",
"ref_id": "BIBREF26"
},
{
"start": 456,
"end": 483,
"text": "Filippova and Strube (2007)",
"ref_id": "BIBREF12"
},
{
"start": 488,
"end": 515,
"text": "Elsner and Charniak (2011b)",
"ref_id": "BIBREF10"
},
{
"start": 737,
"end": 763,
"text": "Charniak and Elsner (2009)",
"ref_id": "BIBREF6"
},
{
"start": 795,
"end": 821,
"text": "Elsner and Charniak (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The role of implicit arguments has been studied early on in the context of semantic processing (Fillmore, 1986; Palmer et al., 1986 ). Yet, the phenomenon has mostly been ignored in semantic role labeling. First data sets, focusing on implicit arguments, have only recently become available: Ruppenhofer et al. (2010) organized a SemEval shared task on \"linking events and participants in discourse\", Gerber and Chai (2012) made available implicit argument annotations for the NomBank corpus (Meyers et al., 2008) and Moor et al. (2013) provide annotations for parts of the OntoNotes corpus (Weischedel et al., 2011) . However, these resources are very limited: The annotations by Moor et al. and Gerber and Chai are restricted to 5 and 10 predicate types, respectively. The training set of the Se-mEval task contains only 245 resolved implicit arguments in total. As pointed out by Silberer and Frank (2012) , additional training data can be heuristically created by treating anaphoric mentions as implicit arguments. Their experimental results showed that artificial training data can indeed improve results, but only when obtained from corpora with manual semantic role annotations (on the sentence level) and gold coreference chains.",
"cite_spans": [
{
"start": 95,
"end": 111,
"text": "(Fillmore, 1986;",
"ref_id": "BIBREF13"
},
{
"start": 112,
"end": 131,
"text": "Palmer et al., 1986",
"ref_id": "BIBREF30"
},
{
"start": 492,
"end": 513,
"text": "(Meyers et al., 2008)",
"ref_id": "BIBREF28"
},
{
"start": 518,
"end": 536,
"text": "Moor et al. (2013)",
"ref_id": "BIBREF29"
},
{
"start": 591,
"end": 616,
"text": "(Weischedel et al., 2011)",
"ref_id": null
},
{
"start": 681,
"end": 707,
"text": "Moor et al. and Gerber and",
"ref_id": null
},
{
"start": 883,
"end": 908,
"text": "Silberer and Frank (2012)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The aim of this work is to automatically construct a data set of implicit arguments and their discourse antecedents. We propose an induction approach that exploits complementary information obtained from pairs of comparable texts. As a basis for this approach, we rely on several preparatory steps proposed in the literature that first identify information two documents have in common (cf. Figure 1 ). In particular, we align corresponding predicateargument structures (PAS) using graph-based clustering (Roth and Frank, 2012b) . We then determine co-referring entities across the texts using coreference resolution techniques on concatenated document pairs (Lee et al., 2012) . These preprocessing steps are described in more detail in Section 3.1. Given the preprocessed comparable texts and aligned PAS, we propose to heuristically identify implicit arguments and link them to their antecedents via the cross-document coreference chains. We describe the details of this approach in Section 3.2.",
"cite_spans": [
{
"start": 505,
"end": 528,
"text": "(Roth and Frank, 2012b)",
"ref_id": "BIBREF35"
},
{
"start": 659,
"end": 677,
"text": "(Lee et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 391,
"end": 399,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Identifying and linking implicit arguments",
"sec_num": "3"
},
{
"text": "The starting point for our approach is the data set of automatically aligned predicate pairs that has been released by Roth and Frank (2012a ; we exploit alignments between corresponding predicates across texts (marked by solid lines) and co-referring entities (marked by dotted lines) to infer implicit arguments (marked by 'i') and link antecedents (curly dashed line) set, henceforth just R&F data, is a collection of 283,588 predicate pairs that have been aligned \"with high precision\" 2 across comparable newswire articles from the Gigaword corpus (Parker et al., 2011) . To use these documents for our argument induction technique, we apply a couple of pre-processing tools on each single document and perform crossdocument entity coreference on pairs of documents.",
"cite_spans": [
{
"start": 119,
"end": 140,
"text": "Roth and Frank (2012a",
"ref_id": "BIBREF34"
},
{
"start": 553,
"end": 574,
"text": "(Parker et al., 2011)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "3.1"
},
{
"text": "Single document pre-processing. We apply several preprocessing steps to all documents in the R&F data: we use the Stanford CoreNLP package 3 for tokenization and sentence splitting. We then apply MATE tools (Bohnet, 2010; Bj\u00f6rkelund et al., 2010) , including the integrated PropBank/NomBank-style semantic parser, to reconstruct local predicate-argument structures for aligned predicates. Finally, we resolve pronouns that occur in a PAS using the coreference resolution system by Martschat et al. (2012) .",
"cite_spans": [
{
"start": 207,
"end": 221,
"text": "(Bohnet, 2010;",
"ref_id": "BIBREF4"
},
{
"start": 222,
"end": 246,
"text": "Bj\u00f6rkelund et al., 2010)",
"ref_id": "BIBREF3"
},
{
"start": 481,
"end": 504,
"text": "Martschat et al. (2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "3.1"
},
{
"text": "Cross-document coreference. We apply crossdocument coreference resolution to induce antecedents for implicit arguments. In practice, we use the Stanford Coreference System (Lee et al., 2013) and run it on pairs of texts by simply providing a single document as input, comprising of a concatenation of the two texts. To perform this step with high precision, we only use the most precise resolution sieves: \"String Match\", \"Relaxed String Match\", \"Precise Constructs\", \"Strict Head Match [A-C]\", and \"Proper Head Noun Match\".",
"cite_spans": [
{
"start": 172,
"end": 190,
"text": "(Lee et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "3.1"
},
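As a minimal sketch of the concatenation trick described above (not the paper's actual implementation, which relies on the Stanford Coreference System and the listed sieves), the bookkeeping reduces to remembering the document boundary and splitting resulting chains back into per-document mentions; the sentence-list representation and mention encoding below are illustrative assumptions:

```python
# Toy sketch: cross-document coreference via document concatenation.
# Documents are lists of tokenized sentences; mentions are
# (sentence_index, token_span) pairs produced on the merged text.

def concatenate_documents(doc_a, doc_b):
    """Join two documents into one pseudo-document and remember where
    the second one starts (in sentence indices)."""
    return doc_a + doc_b, len(doc_a)

def split_chain(chain, boundary):
    """Split a chain computed on the merged text back into the mentions
    belonging to each source document."""
    part_a = [(s, span) for s, span in chain if s < boundary]
    part_b = [(s - boundary, span) for s, span in chain if s >= boundary]
    return part_a, part_b

doc_a = ["El Salvador still has troops in Iraq .".split()]
doc_b = ["Nicaragua has withdrawn its troops from Iraq .".split()]
merged, boundary = concatenate_documents(doc_a, doc_b)
chain = [(0, (6, 7)), (1, (6, 7))]   # e.g. the two "Iraq" mentions
print(split_chain(chain, boundary))  # ([(0, (6, 7))], [(0, (6, 7))])
```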
{
"text": "Given a pair of aligned predicates from two comparable texts, we examine the parser output to identify the arguments in each predicate-argument structure (PAS). We compare the set of realized argument positions in both structures to determine whether one PAS contains an argument position (explicit) that has not been realized in the other PAS (implicit). For each implicit argument, we identify appropriate antecedents by considering the cross-document coreference chain of its explicit counterpart. As our goal is to link arguments within discourse, we restrict candidate antecedents to mentions that occur in the same document as the implicit argument. We apply a number of restrictions to the resulting pairs of implicit arguments and antecedents to minimize the impact of errors from preprocessing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification and linking approach",
"sec_num": "3.2"
},
{
"text": "-The aligned PAS should consist of a different number of arguments (to minimize the impact of argument labeling errors) -The antecedent should not be a resolved pronoun (to avoid errors resulting from incorrect pronoun resolution) -The antecedent should not be in the same sentence as the implicit argument (to circumvent cases, in which an implicit argument is actually explicit but has not been recognized by the parser)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification and linking approach",
"sec_num": "3.2"
},
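The following sketch makes the identification and linking heuristic, including the three restrictions above, concrete. The data structures are invented for exposition: a PAS maps argument labels to mentions, a mention being a (doc_id, sent_idx, text, is_pronoun) tuple; the paper itself operates on parser output and the high-precision coreference chains:

```python
# Sketch of Section 3.2: compare realized argument positions of aligned
# PAS, then link each implicit argument via its explicit counterpart's
# cross-document coreference chain, applying the three restrictions.

def find_implicit_arguments(pas_a, pas_b):
    """Return (side, label, explicit_mention) triples for argument
    positions realized in one PAS but not in the other."""
    if len(pas_a) == len(pas_b):      # restriction: different arity only
        return []
    return ([("a", lbl, pas_b[lbl]) for lbl in set(pas_b) - set(pas_a)] +
            [("b", lbl, pas_a[lbl]) for lbl in set(pas_a) - set(pas_b)])

def link_antecedent(implicit_doc, pred_sent, explicit_mention, chains):
    """Pick an antecedent from the coreference chain of the explicit
    counterpart, restricted to the implicit argument's own document."""
    for chain in chains:
        if explicit_mention not in chain:
            continue
        for doc_id, sent_idx, text, is_pronoun in chain:
            if doc_id != implicit_doc:   # antecedent must be in-document
                continue
            if is_pronoun:               # restriction: no resolved pronouns
                continue
            if sent_idx == pred_sent:    # restriction: not the same sentence
                continue
            return (doc_id, sent_idx, text)
    return None

pas_a = {"A0": ("doc_a", 4, "Nicaragua", False)}
pas_b = {"A0": ("doc_b", 0, "Honduras", False),
         "A2": ("doc_b", 0, "from Iraq", False)}
chains = [[("doc_b", 0, "from Iraq", False), ("doc_a", 1, "Iraq", False)]]
for side, label, explicit in find_implicit_arguments(pas_a, pas_b):
    print(label, link_antecedent("doc_a", 4, explicit, chains))
# -> A2 ('doc_a', 1, 'Iraq')
```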
{
"text": "We apply the identification and linking approach to the full R&F data set of aligned predicates. As a result, we induce a total of 701 implicit argument and antecedent pairs, each in a separate document, involving 535 different predicates. Examples are displayed in Table 1 . Note that 701 implicit arguments from 283,588 pairs of predicate-argument structures seem to represent a fairly low recall. Most predicate pairs in the high precision data set of Roth and Frank (2012a) do, however, consist of identical argument positions (84.5%). In the remaining cases, in which an implicit argument can be identified (15.5%), an antecedent in discourse cannot always be found using the high precision coreference sieves. This does not mean that implicit arguments are a rare phenomenon in general. In fact, 38.9% of all manually aligned predicate pairs in Roth and Frank (2012a) involved a different number of arguments. We manually evaluated a subset of 90 induced implicit arguments and found 80 discourse antecedents to be correct (89%). Some incorrectly linked instances still result from preprocessing errors. In Table 2, we present a range of different error types that occurred when extracting implicit arguments without any restrictions.",
"cite_spans": [
{
"start": 455,
"end": 477,
"text": "Roth and Frank (2012a)",
"ref_id": "BIBREF34"
},
{
"start": 851,
"end": 873,
"text": "Roth and Frank (2012a)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Resulting data set",
"sec_num": "3.3"
},
{
"text": "Our first experiment assesses the utility of automatically induced implicit arguments and antecedent pairs for the task of implicit argument linking. For evaluation, we use the data sets from the SemEval 2010 task on Linking Events and their Participants in Discourse (Ruppenhofer et al., 2010, henceforth just SemEval) . For direct comparison with previous results and heuristic acquisition techniques (cf. Section 2), we apply the implicit argument identification and linking model by Silberer and Frank (2012, henceforth S&F) for training and testing.",
"cite_spans": [
{
"start": 268,
"end": 319,
"text": "(Ruppenhofer et al., 2010, henceforth just SemEval)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Linking implicit arguments",
"sec_num": "4"
},
{
"text": "Both the training and test sets of the SemEval task are text corpora extracted from Sherlock Holmes novels, with manual frame semantic annotations including implicit arguments. In the actual linking task (\"NI-only\"), labels are provided for local arguments and participating systems have to perform the following three sub-tasks: (1) identify implicit arguments (IA), (2) predict whether each IA is resolvable and, if so, (3) find an appropriate antecedent. The task organizers provide two versions of their data sets: one based on FrameNet annotations and one based on PropBank/NomBank annotations. We found that the latter, however, only contains a subset of the implicit argument annotations from the FrameNet-based version. As all previous results in this task have been reported on the FrameNet data set, we adopt the same setting. Note that our additional training data is automatically labeled with a PropBank/NomBank-style parser. That is, we need to map our annotations to FrameNet. The organizers of the SemEval shared task provide a manual mapping dictionary for predicates in the annotated data set. We make use of this manual mapping and additionally use SemLink 1.1 4 for mapping predicates and arguments not in the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task summary",
"sec_num": "4.1"
},
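The two-stage predicate mapping described above can be sketched as a dictionary lookup with a fallback. The entries below are toy stand-ins, not the actual SemEval dictionary or SemLink 1.1 content:

```python
# Hedged sketch of the mapping in Section 4.1: manual SemEval dictionary
# first, SemLink-derived mapping as fallback. Both tables are toy examples.

MANUAL_MAP = {"withdraw.01": "Removing"}      # stand-in for SemEval entries
SEMLINK_MAP = {"sign.02": "Sign_agreement"}   # stand-in for SemLink 1.1 entries

def to_framenet(propbank_sense):
    """Map a PropBank/NomBank predicate sense to a FrameNet frame,
    preferring the manual dictionary over the SemLink fallback."""
    if propbank_sense in MANUAL_MAP:
        return MANUAL_MAP[propbank_sense]
    return SEMLINK_MAP.get(propbank_sense)    # None if unmapped

print(to_framenet("withdraw.01"))  # -> 'Removing'
print(to_framenet("sign.02"))      # -> 'Sign_agreement'
```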
{
"text": "We make use of the system by S&F to train a new model for the NI-only task. As mentioned in the previous sub-section, this task consists of three steps: In step (1), implicit arguments are identified as unfilled FrameNet core roles that are not competing with roles that are already filled; in step (2), a SVM classifier is used to predict whether implicit arguments are resolvable based on a small amount of features -semantic type of the affected Frame Element, the relative frequency of its realization type in the SemEval training corpus, and a boolean feature that indicates whether the affected sentence is in passive voice and does not contain a (deep) subject. In step (3), we apply the same features and classifier as S&F, i.e., the BayesNet implementation from Weka (Witten and Frank, 2005) , to find appropriate antecedents for (predicted) resolvable arguments. S&F report that their best results were obtained when considering all entities as candidate antecedents that are syntactic constituents from the present and the past two sentences, or entities that occurred at least five times in the previous discourse (\"Chains+Win\" setting). In their evaluation, the latter of these two restrictions crucially depended on gold coreference chains. As the automatic coreference chains in our Table 2 : Examples of erroneous pairs of implicit arguments and antecedents. In (1), the parser did not recognize \"Statistics\" as an argument of showed; in (2), the parser mislabeled \"French\" as a locative modifier; both errors lead to incorrectly identified implicit arguments. In 3, the implicit argument is correct but the wrong antecedent was identified because \"major\" had been mislabeled in the aligned predicate-argument structure data are rather sparse (and noisy), we only consider syntactic constituents from the present and the past two sentences as antecedents (\"SentWin\" setting).",
"cite_spans": [
{
"start": 776,
"end": 800,
"text": "(Witten and Frank, 2005)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 1298,
"end": 1305,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model details",
"sec_num": "4.2"
},
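A sketch of the "SentWin" candidate window, assuming constituents have already been extracted per sentence (the actual system works on the syntactic constituents of the SemEval texts):

```python
# Sketch of the "SentWin" setting (Section 4.2): candidate antecedents are
# all constituents from the current and the past two sentences.

def sentwin_candidates(constituents_per_sentence, current_idx, window=2):
    """Collect candidates from the current sentence and the preceding
    `window` sentences."""
    start = max(0, current_idx - window)
    candidates = []
    for idx in range(start, current_idx + 1):
        candidates.extend(constituents_per_sentence[idx])
    return candidates

sents = [["Holmes"], ["the letter", "Watson"], ["the door"]]
print(sentwin_candidates(sents, 2))
# -> ['Holmes', 'the letter', 'Watson', 'the door']
```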
{
"text": "Before training and testing a new model with our own data, we perform feature selection using 10-fold cross validation. We run the feature selection on a combination of the SemEval training data and our additional data set in order to find a set of features that generalizes best across the two different corpora. We found these to be features regarding \"prominence\", selectional preferences (\"sp supersense\"), the POS tags of entity mentions, and semantic types of argument positions (\"semType dni.entity\"). Note that the S&F system does not make use of any lexicalized information. Instead, semantic features are computed based on the highest abstraction level in WordNet (Fellbaum, 1998) . For detailed description of all features, see Silberer and Frank (2012) .",
"cite_spans": [
{
"start": 674,
"end": 690,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 739,
"end": 764,
"text": "Silberer and Frank (2012)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model details",
"sec_num": "4.2"
},
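One simple way to realize such a feature selection is greedy forward selection. The sketch below is illustrative only, with `evaluate` standing in for 10-fold cross-validation of the linking model on the combined training data; the toy scoring function and feature names mirror the groups the paper found useful:

```python
# Hedged sketch of greedy forward feature selection (Section 4.2).

def greedy_forward_selection(all_features, evaluate):
    """Add one feature at a time, keeping the addition that improves the
    cross-validation score most; stop when nothing improves."""
    selected, best_score = [], float("-inf")
    improved = True
    while improved:
        improved = False
        for feat in all_features:
            if feat in selected:
                continue
            score = evaluate(selected + [feat])
            if score > best_score:
                best_score, best_feat, improved = score, feat, True
        if improved:
            selected.append(best_feat)
    return selected

# Toy scorer: rewards the useful feature groups, penalizes set size.
useful = {"prominence", "sp_supersense", "pos_tags", "semType_dni.entity"}
evaluate = lambda feats: sum(f in useful for f in feats) - 0.1 * len(feats)
print(greedy_forward_selection(
    ["prominence", "sp_supersense", "pos_tags", "semType_dni.entity", "voice"],
    evaluate))
```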
{
"text": "For direct comparison in the full task, both with S&F's model and other previously published results, we adopt the precision, recall and F 1 measures as defined in Ruppenhofer et al. (2010) . We compare our results with those previously reported on the Se-mEval task (see Table 3 for a summary): Chen et al. (2010) adapted SEMAFOR, the best performing system that participated in the actual task in 2010. Tonelli and Delmonte (2011) presented a revised version of their SemEval system (Tonelli and Delmonte, 2010) , which outperformed SEMAFOR in terms of recall (6%) and F 1 score (8%). The best results in terms of recall and F 1 score up to date have been reported by Laparra and Rigau (2012) , with 25% and 19%, respectively. Our model outperforms their state-of-the-art system in terms of precision (21%) but at a higher cost of recall (8% influencing factors for their high recall are probably (1) their improved method for identifying (resolvable) implicit arguments, and (2) their addition of lexicalized and ontological features.",
"cite_spans": [
{
"start": 164,
"end": 189,
"text": "Ruppenhofer et al. (2010)",
"ref_id": "BIBREF36"
},
{
"start": 296,
"end": 314,
"text": "Chen et al. (2010)",
"ref_id": "BIBREF7"
},
{
"start": 485,
"end": 513,
"text": "(Tonelli and Delmonte, 2010)",
"ref_id": "BIBREF41"
},
{
"start": 670,
"end": 694,
"text": "Laparra and Rigau (2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 272,
"end": 279,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Comparison to the original results reported by S&F, whose system we use, shows that our additional data improves precision (from 6% to 21%) and F 1 score (from 7% to 12%). The loss in recall is marginal (-1%) given the size of the test set (259 resolvable cases in total). The result in precision is the second highest score reported on this task. Interestingly, the improvements are higher than those of the best training set used in the original study by Silberer and Frank (2012), even though their additional data set is three times bigger than ours and is based on manual semantic annotations. We conjecture that their low gain in precision could be a side effect triggered by two factors: on the one hand, their model crucially relies on coreference chains, which are automatically generated for the test set and hence are rather noisy. On the other hand, their heuristically created training data might not represent implicit argument instances adequately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In our second experiment, we examine the effect of implicit arguments on local coherence, i.e., the question of how well a local argument (non-)realization fits into a given context. We approach this question as follows: first, we assemble a data set of document pairs that differ only with respect to a single realization decision (Section 5.1). Given each pair in this data set, we ask human annotators to indicate their preference for the implicit or explicit argument realization in the pre-specified context (Section 5.2). Second, we attempt to emulate the decision process computationally using a discriminative model based on discourse and entity-specific features (Section 5.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Implicit arguments in coherence modeling",
"sec_num": "5"
},
{
"text": "We use the induced data set (henceforth source data), as described in Section 3, as a starting point for composing a set of document pairs that involve implicit and explicit arguments. To make sure that each document pair in this data set only differs with respect to a single realization decision, we first create two copies of each document from the source data: one copy remains in its original form, and the other copy will be modified with respect to a single argument realization. Example (2) illustrates an example of an original and modified (marked by an asterik) sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data compilation",
"sec_num": "5.1"
},
{
"text": "( Note that adding and removing arguments at random can lead to structures that are semantically implausible. Hence, we restrict this procedure to predicate-argument structures (PAS) that actually occur and are aligned across two texts, and create modifications by replacing a single argument position in one text with the corresponding argument position in the comparable text. Examples (2) and 3show two such comparable texts. The original PAS in Example (2) contains an explicit argument that is implicit in the aligned PAS and hence removed in the modified version. Vice versa, the original text in (3) involves an implicit argument, which is made explicit in the modified version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data compilation",
"sec_num": "5.1"
},
{
"text": "( 3) We ensure that the modified structure fits into the given context grammatically by only considering PAS with identical predicate form and constituent order. We found that this restriction constrains affected arguments to be modifiers, prepositional phrases and direct objects. We argue that this is actually a desirable property because more complicated alternations could affect coherence by themselves; resulting interplays would make it difficult to distinguish between the isolated effect of argument realization itself and other effects, triggered for example by sentence order (Gordon et al., 1993) .",
"cite_spans": [
{
"start": 588,
"end": 609,
"text": "(Gordon et al., 1993)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data compilation",
"sec_num": "5.1"
},
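A toy sketch of the pair construction: for an aligned PAS pair in which one side realizes an argument the other leaves implicit, build an original and a modified copy that differ in exactly that argument. The string surgery below is a stand-in for the constituent-level edit the paper performs:

```python
# Sketch of the document-pair construction in Section 5.1.

def make_pair(sentence, argument, realized):
    """Return (original, modified) versions of a sentence: if the argument
    is realized, the modified copy drops it; otherwise it is inserted."""
    if realized:
        original = sentence.replace("{ARG}", argument)
        modified = sentence.replace(" {ARG}", "")
    else:
        original = sentence.replace(" {ARG}", "")
        modified = sentence.replace("{ARG}", argument)
    return original, modified

orig, mod = make_pair("They have withdrawn their troops {ARG}.",
                      "from Iraq", realized=True)
print(orig)  # They have withdrawn their troops from Iraq.
print(mod)   # They have withdrawn their troops.
```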
{
"text": "We set up a web experiment using the NLTK package (Belz and Kow, 2011) to collect (local) coherence ratings for implicit and explicit arguments. For this experiment, we compiled a data set of 150 document pairs. As described in Section 5.1, each text pair consists of mostly the same text, with the only difference being one argument realization.",
"cite_spans": [
{
"start": 50,
"end": 70,
"text": "(Belz and Kow, 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "5.2"
},
{
"text": "We presented all 150 pairs to two annotators 7 and asked them to indicate their preference for one alternative over the other using a continuous slider scale. The annotators got to see the full texts, with the alternatives presented next to each other. To make texts easier to read and differences easier to spot, we collapsed all identical sentences into one column and highlighted the aligned predicate (in both texts) and the affected argument (in the explicit case). An example is shown in Figure 2 . To avoid any bias in the annotation process, we shuffled the sequence of text pairs and randomly assigned the side of display (left/right) of each realization type (explicit/implicit). Note that instead of providing a definition of local coherence ourselves, we simply asked the annotators to rate how \"natural\" a realization sounds given the discourse context.",
"cite_spans": [],
"ref_spans": [
{
"start": 494,
"end": 502,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Annotation",
"sec_num": "5.2"
},
{
"text": "We found that annotators made use of the full rating scale, which spans from -50 to +50, with the extremes indicating either a strong preference for the text on the left hand side or the right hand side, respectively. Most ratings are, however, concentrated more towards the center of the scale (i.e., around zero). This seems to imply that the use of implicit or explicit arguments did not make a considerable difference most of the time. The first author confirmed this assumption and resolved disagreements between annotators in several group discussions. The annotators also affirmed that some cases do not read naturally when a specific argument is or is not realized at a given position in discourse. Examples (4) and (5) illustrate two cases, in which a redundant argument is realized (A4, or destination) or a coherence establishing argument has been omitted (A2, or co-signer). Following discussions with the annotators, we discarded all items from the final data set, for which no clear preference could be established (72%) or the annotators had different preferences (9%). We mapped all remaining items into two classes according to whether the affected argument had to be implicit (9 texts) or explicit (20 texts). All 29 uniquely classified texts are used as a small gold standard test set for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "5.2"
},
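Schematically, the reduction of slider ratings to the two-class gold standard could look as follows. The threshold and the field layout are invented for illustration, since the paper resolved unclear cases in group discussions rather than by a fixed cutoff:

```python
# Hedged sketch of the gold-standard construction in Section 5.2.

def to_gold_label(rating_a, rating_b, threshold=10):
    """Map two annotators' ratings (negative = prefer the implicit side,
    positive = prefer the explicit side) to a class label or None."""
    if abs(rating_a) < threshold and abs(rating_b) < threshold:
        return None                      # no clear preference: discard
    if (rating_a > 0) != (rating_b > 0):
        return None                      # annotators disagree: discard
    return "explicit" if rating_a > 0 else "implicit"

print(to_gold_label(-35, -20))  # -> implicit
print(to_gold_label(5, -3))     # -> None (no clear preference)
```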
{
"text": "We model the decision process that underlies the (non-)realization of arguments using a SVM classifier and a range of discourse features. The features can be classified into three groups: features specific to the affected predicate-argument structure (Parg), the (automatic) coreference chain of the affected argument (Coref), and the discourse context (Disc).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence model",
"sec_num": "5.3"
},
{
"text": "Parg includes the absolute and relative number of realized arguments; the number of modifiers in the PAS; and the total length (in words) of the PAS and the complete sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence model",
"sec_num": "5.3"
},
{
"text": "Coref includes the number of previous/follow-up mentions in a fixed sentence window; the distance (in number of words/sentences) to the previous/next mention; the distribution of occurrences over the previous/succeeding two sentences; 9 and the POS of previous/follow-up mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence model",
"sec_num": "5.3"
},
{
"text": "Disc includes the total number of coreference chains in the text; the occurrence of pronouns in the current sentence; lexical repetitions in the previous/follow-up sentence; the current position in discourse (begin, middle, end); and a feature indicating whether the affected argument occured in the first sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence model",
"sec_num": "5.3"
},
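To make the feature groups concrete, here is a toy extractor for a few of the Coref features, with a coreference chain simplified to the sentence indices in which the entity occurs; the exact feature inventory and representation in the paper differ:

```python
# Toy sketch of some Coref features from Section 5.3, computed for an
# entity at a given sentence position.

def coref_features(chain, position, window=2):
    """Counts and distances of mentions around `position`."""
    previous = [s for s in chain if s < position]
    following = [s for s in chain if s > position]
    return {
        "n_prev_in_window": sum(position - s <= window for s in previous),
        "n_next_in_window": sum(s - position <= window for s in following),
        "dist_prev": position - max(previous) if previous else None,
        "dist_next": min(following) - position if following else None,
    }

print(coref_features(chain=[0, 1, 5], position=3))
# {'n_prev_in_window': 1, 'n_next_in_window': 1, 'dist_prev': 2, 'dist_next': 2}
```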
{
"text": "Note that most of these features overlap with those successfully applied in previous work. For example, Pitler and Nenkova (2008) also use text length, sentence-to-sentence transitions, word overlap and pronoun occurrences as features for predicting readability. Our own contribution lies in the definition of PAS-specific features and the adaptation of all features to the task of predicting (non-)realization of arguments in a predicate-argument structure.",
"cite_spans": [
{
"start": 104,
"end": 129,
"text": "Pitler and Nenkova (2008)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence model",
"sec_num": "5.3"
},
{
"text": "We do not make use of any manually annotated data for training. Instead, our model relies solely on the automatically induced source data, described in Section 3, for learning. We prepare this data set as follows: first, we remove all data points that also occur in the test set. Second, we split all pairs of texts into two groups -texts that contain a predicate-argument structure in which an implicit argument has been identified (IA), and their comparable counterparts that contain the aligned PAS with an explicit argument (EA). All texts are labelled according to their group. For all texts in group EA, we remove the explicit argument from the aligned PAS. This way, the feature extractor always gets to see the text and automatic annotations as if the realization decision had not been performed and can thus extract unbiased feature values for the affected entity and argument position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": "5.4"
},
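A sketch of this training-data preparation, with the explicit-argument removal reduced to string surgery for illustration (the real system removes the argument from the parsed PAS before feature extraction, and the field names are assumptions):

```python
# Hedged sketch of the IA/EA split in Section 5.4.

def prepare_training(pairs, test_ids):
    """`pairs` holds dicts with an IA-side and an EA-side text; the explicit
    argument span is deleted from the EA copy before labeling."""
    data = []
    for p in pairs:
        if p["id"] in test_ids:
            continue                              # no overlap with the test set
        data.append((p["ia_text"], "implicit"))
        ea = p["ea_text"].replace(p["explicit_span"], "")
        data.append((ea, "explicit"))             # argument removed, label kept
    return data

pairs = [{"id": 1,
          "ia_text": "They withdrew their troops.",
          "ea_text": "They withdrew their troops from Iraq.",
          "explicit_span": " from Iraq"}]
print(prepare_training(pairs, test_ids=set()))
```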
{
"text": "The goal of this task is to correctly predict the realization type (implicit or explicit) of an argument that maximizes the coherence of the document. As a proxy for coherence, we use the naturalness ratings given by our annotators. We evaluate classification performance on the part of our test set for which clear preferences have been established. We report results in terms of precision, recall and F 1 score. We compute precision as the fraction of correct classifier decisions divided by the total number of classifications; and recall as the fraction of correct classifier decisions divided by the total number of test items. Note that precision and recall are identical when the model provides a class label for every test item. We compute F 1 as the harmonic mean between precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation setting",
"sec_num": "5.5"
},
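These definitions translate directly into code; a minimal sketch:

```python
# The evaluation measures of Section 5.5: precision over labeled items,
# recall over all test items, F1 as their harmonic mean (precision and
# recall coincide when every item receives a label).

def prf(correct, classified, total):
    precision = correct / classified if classified else 0.0
    recall = correct / total if total else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# E.g. 24 correct decisions out of 26 labeled items from a 29-item test set:
print(prf(24, 26, 29))
```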
{
"text": "For comparison with previous work, we further apply a couple of previously proposed local coherence models: the original entity grid model by Barzilay and Lapata (2005) , a modified version that uses topic models (Elsner and Charniak, 2011a) and an extended version that includes entity-specific features (Elsner and Charniak, 2011b) . We further apply the discourse-new model by Elsner and Charniak (2008) and the pronoun-based model by Charniak and Elsner (2009) . For all of the aforementioned models, we use their respective implementation provided with the Brown Coherence Toolkit 10 . Note that the toolkit only returns one coherence score for each document. To use the toolkit for argument classification, we use two documents per data pointone that contains the affected argument explicitly and one that does not (implicit argument) -and treat the higher scoring variant as classification output. If both documents achieve the same score, we neither count the test item as correctly nor as incorrectly classified. In contrast, we apply our own model only on the document that contains the implicit argument, and use the classifier to predict whether this realization type fits into the given context or not. Note that our model has an advantage here because it is specifically designed for this task. Yet, all models compute local coherence ratings based on entity occurrences and should thus be able to predict which realization type coheres best with the given discourse context. 11",
"cite_spans": [
{
"start": 142,
"end": 168,
"text": "Barzilay and Lapata (2005)",
"ref_id": "BIBREF0"
},
{
"start": 213,
"end": 241,
"text": "(Elsner and Charniak, 2011a)",
"ref_id": "BIBREF9"
},
{
"start": 305,
"end": 333,
"text": "(Elsner and Charniak, 2011b)",
"ref_id": "BIBREF10"
},
{
"start": 380,
"end": 406,
"text": "Elsner and Charniak (2008)",
"ref_id": "BIBREF8"
},
{
"start": 438,
"end": 464,
"text": "Charniak and Elsner (2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation setting",
"sec_num": "5.5"
},
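The comparison protocol for the toolkit models can be sketched as follows, with `score` standing in for one of the coherence models; ties are excluded from both the correct and the incorrect counts, as described above:

```python
# Sketch of the two-variant classification protocol (Section 5.5).

def classify_pair(score, explicit_doc, implicit_doc):
    """Score both variants of a document and treat the higher-scoring one
    as the classification; None marks a tie."""
    s_exp, s_imp = score(explicit_doc), score(implicit_doc)
    if s_exp > s_imp:
        return "explicit"
    if s_imp > s_exp:
        return "implicit"
    return None  # tie: counted neither as correct nor as incorrect

# Toy scorer that prefers the shorter variant.
score = lambda doc: -len(doc)
print(classify_pair(score, "troops from Iraq", "troops"))  # -> implicit
```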
{
"text": "The results are summarized in Table 4 . As all models provided class labels for almost all test instances, we focus our discussion on F 1 scores. The majority class in our test set is the explicit realization type, making up 20 of the 29 test items (69%).",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.6"
},
{
"text": "The original entity grid model produced differing scores for the two realization types only in 26 cases. The model exhibits a strong preference for the implicit realization type: it predicts this class in 22 cases, resulting in an F 1 score of only 15%. Taking a closer look at the features of the model reveals that this an expected outcome: in its original setting, the entity grid learns realization patterns in the form of sentence-to-sentence transitions. Most entities are, however, only mentioned a few times in a text, which means that non-realizations make up the 'most probable' class -independently of whether they are relevant in a given context or not. The models by Charniak and Elsner (2009) and Elsner and Charniak (2011a) , which are not based on an entity grid, do not suffer from this effect and achieve better results, with F 1 scores of 38% and 48%, respectively. The topical and entity-specific refinements to the entity grid model also alleviate the bias towards non-realizations, resulting in improved F 1 scores of 18% and 34%, respectively. To counter-balance this issue altogether, we train a simplified version of our own model that only uses features that involve occurrence patterns. The main difference between this simplified model and the original entity grid model lies in the different use of training data: while entity grid models treat all non-realized items equally, our model gets to \"see\" actual examples of entities that are implicit. In other words, our simplified model takes into account implicit mentions of entities, not only explicit ones. The results show that this extra information has a significant (p<0.01, using a randomization test (Yeh, 2000) ) impact on test set performance, basically raising F 1 from 15% to 83%. Using all features of our model further increases F 1 score to 90%, the highest score achieved overall.",
"cite_spans": [
{
"start": 680,
"end": 706,
"text": "Charniak and Elsner (2009)",
"ref_id": "BIBREF6"
},
{
"start": 711,
"end": 738,
"text": "Elsner and Charniak (2011a)",
"ref_id": "BIBREF9"
},
{
"start": 1687,
"end": 1698,
"text": "(Yeh, 2000)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.6"
},
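The significance test mentioned above is an approximate randomization (shuffle) test in the spirit of Yeh (2000); a minimal sketch over paired 0/1 correctness outcomes, with toy data:

```python
# Sketch of the approximate randomization test used for the significance
# claim in Section 5.6: repeatedly swap the two systems' per-item outcomes
# at random and count how often the absolute score difference is at least
# as large as the observed one.
import random

def randomization_test(outcomes_a, outcomes_b, trials=10000, seed=0):
    """outcomes_* are per-item 0/1 correctness vectors of equal length."""
    rng = random.Random(seed)
    observed = abs(sum(outcomes_a) - sum(outcomes_b))
    at_least_as_extreme = 0
    for _ in range(trials):
        swapped_a, swapped_b = [], []
        for a, b in zip(outcomes_a, outcomes_b):
            if rng.random() < 0.5:
                a, b = b, a          # swap this item's paired outcomes
            swapped_a.append(a)
            swapped_b.append(b)
        if abs(sum(swapped_a) - sum(swapped_b)) >= observed:
            at_least_as_extreme += 1
    return (at_least_as_extreme + 1) / (trials + 1)  # p-value estimate

# Toy example: system A correct on far more items than system B.
print(randomization_test([1] * 24 + [0] * 5, [1] * 4 + [0] * 25))
```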
{
"text": "The highest weighted features in our model include all three feature groups: for example, the number of coreferent mentions within the preceeding/following two sentences (Coref), the number of words already realized in the affected predicateargument structure (Parg), and the total number of coreference chains in the document (Disc).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.6"
},
{
"text": "In this paper, we presented a novel approach to accurately induce implicit arguments and discourse antecedents from comparable texts (cf. Section 3). We demonstrated the benefit of this kind of data for linking implicit arguments and modeling local coherence. Our experiments revealed three particularly interesting results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Firstly, a small data set of (automatically induced) implicit arguments can have a greater impact on argument linking models than a bigger data set of artificially created instances (cf. Section 4). Secondly, the use of implicit vs. explicit arguments, while being a subtle difference in most contexts, can have a clear impact on text ratings. Thirdly, our automatically created training data enables models to learn features that considerably improve prediction of locally coherent argument realizations (cf. Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "For the task of implicit argument linking, more training data will be needed to further advance the state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Our method for inducing this kind of data, by exploiting aligned predicateargument structures from comparable texts, has shown promising results. Future work will have to explore this direction more fully, for example, by identifying ways to induce data with higher recall. Integrating argument (non-)realization into a full model of local coherence also remains part of future work. In this paper, we presented a suitable basis for such work: a training set that contains empirical data on implicit arguments in discourse; and a feature set that models argument realization with high accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "cf. http://www.cl.uni-heidelberg.de/%7Emroth/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The used method achieved a precision of 86.2% at a recall of 29.1% on the Roth and Frank (2012a) test set.3 http://nlp.stanford.edu/software/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://verbs.colorado.edu/semlink/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Results as reported inTonelli and Delmonte (2011) 6 Results computed as an average over the scores given for both test files; rounded towards the number given for the test file that contained more instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Both annotators are undergraduate students in Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that both examples are only excerpts from the affected texts. The annotators got to see the full context. a hammer,\" Lamari told reporters after signing the agreement of intent[\u2205].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This type of feature is very similar to the transition patterns in the original entity grid. The only difference is that our features are not typed with respect to the grammatical function of explicit realizations. The reason for skipping this information lies in the insignificant amount of relevant samples in our (noisy) training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "cf. http://www.ling.ohio-state.edu/%7Emelsner/ 11 Recall that input document pairs are identical except for the affected argument position. Consequently, the resulting coherence scores only differ with respect to affected entity realizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to the Landesgraduiertenf\u00f6rderung Baden-W\u00fcrttemberg for funding within the research initiative \"Coherence in language processing\" at Heidelberg University. We thank our annotators and four anonymous reviewers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modeling local coherence: An entity-based approach",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "141--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Pro- ceedings of the 43rd Annual Meeting of the Associa- tion for Computational Linguistics, Ann Arbor, Michi- gan, USA, 25-30 June 2005, pages 141-148.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling local coherence: An entity-based approach",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "1",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computa- tional Linguistics, 34(1):1-34.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Discrete vs. continuous rating scales for language evaluation in nlp",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "230--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anja Belz and Eric Kow. 2011. Discrete vs. continuous rating scales for language evaluation in nlp. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 230-235, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A high-performance syntactic and semantic dependency parser",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Love",
"middle": [],
"last": "Hafdell",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2010,
"venue": "Coling 2010: Demonstration Volume",
"volume": "",
"issue": "",
"pages": "33--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund, Bernd Bohnet, Love Hafdell, and Pierre Nugues. 2010. A high-performance syntac- tic and semantic dependency parser. In Coling 2010: Demonstration Volume, pages 33-36, Beijing, China, August. Coling 2010 Organizing Committee.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Top accuracy and fast dependency parsing is not a contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Lin- guistics (Coling 2010), pages 89-97, Beijing, China, August.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Topic continuity in written english narrative",
"authors": [
{
"first": "Cheryl",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 1983,
"venue": "Topic Continuity in Discourse: A Quantitative Cross-Language Study. John Benjamins",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheryl Brown. 1983. Topic continuity in written english narrative. In Talmy Givon, editor, Topic Continuity in Discourse: A Quantitative Cross-Language Study. John Benjamins, Amsterdam, The Netherlands.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "EM works for pronoun anaphora resolution",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)",
"volume": "",
"issue": "",
"pages": "148--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Micha Elsner. 2009. EM works for pronoun anaphora resolution. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 148-156, Athens, Greece, March.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SEMAFOR: Frame argument resolution with log-linear models",
"authors": [
{
"first": "Desai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "264--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desai Chen, Nathan Schneider, Dipanjan Das, and Noah A. Smith. 2010. SEMAFOR: Frame argument resolution with log-linear models. In Proceedings of the 5th International Workshop on Semantic Evalua- tion, pages 264-267, Uppsala, Sweden, July.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Coreferenceinspired coherence modeling",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT, Short Papers",
"volume": "",
"issue": "",
"pages": "41--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner and Eugene Charniak. 2008. Coreference- inspired coherence modeling. In Proceedings of ACL- 08: HLT, Short Papers, pages 41-44, Columbus, Ohio, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Disentangling chat with local coherence models",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1179--1189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner and Eugene Charniak. 2011a. Disentan- gling chat with local coherence models. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies, pages 1179-1189, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Extending the entity grid with entity-specific features",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "125--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Elsner and Eugene Charniak. 2011b. Extending the entity grid with entity-specific features. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 125-129, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, Mas- sachusetts, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Extending the entity-grid coherence model to semantically related entities",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "139--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Filippova and Michael Strube. 2007. Extending the entity-grid coherence model to semantically re- lated entities. In Proceedings of the 11th European Workshop on Natural Language Generation, Schloss Dagstuhl, Germany, 17-20 June 2007, pages 139-142.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pragmatically controlled zero anaphora",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the twelfth annual meeting of the Berkeley Linguistics Society",
"volume": "",
"issue": "",
"pages": "95--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. J. Fillmore. 1986. Pragmatically controlled zero anaphora. In Proceedings of the twelfth annual meet- ing of the Berkeley Linguistics Society, pages 95-107.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semantic Role Labeling of Implicit Arguments for Nominal Predicates",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "4",
"pages": "755--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Gerber and Joyce Chai. 2012. Semantic Role Labeling of Implicit Arguments for Nominal Predi- cates. Computational Linguistics, 38(4):755-798.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pronouns, names, and the centering of attention in discourse",
"authors": [
{
"first": "Peter",
"middle": [
"C"
],
"last": "Gordon",
"suffix": ""
},
{
"first": "Barbara",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "Laura",
"middle": [
"A"
],
"last": "Gilliom",
"suffix": ""
}
],
"year": 1993,
"venue": "Cognitive Science",
"volume": "17",
"issue": "",
"pages": "311--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter C. Gordon, Barbara J. Grosz, and Laura A. Gilliom. 1993. Pronouns, names, and the centering of attention in discourse. Cognitive Science, 17:311-347.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards weakly supervised resolution of null instantiations",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Gorinski",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers",
"volume": "",
"issue": "",
"pages": "119--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Gorinski, Josef Ruppenhofer, and Caroline Sporleder. 2013. Towards weakly supervised resolu- tion of null instantiations. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers, pages 119-130, Potsdam, Germany, March.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Centering: A framework for modeling the local coherence of discourse",
"authors": [
{
"first": "Barbara",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "2",
"pages": "203--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the lo- cal coherence of discourse. Computational Linguis- tics, 21(2):203-225.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Centered logic: The role of entity centered sentence representation in natural language inferencing",
"authors": [
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 1979,
"venue": "Proceedings of the 6th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "435--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aravind K. Joshi and Steve Kuhn. 1979. Centered logic: The role of entity centered sentence representation in natural language inferencing. In Proceedings of the 6th International Joint Conference on Artificial Intel- ligence, Tokyo, Japan, August, pages 435-439.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploiting explicit annotations and semantic types for implicit argument resolution",
"authors": [
{
"first": "Egoitz",
"middle": [],
"last": "Laparra",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth IEEE International Conference on Semantic Computing (ICSC 2010)",
"volume": "",
"issue": "",
"pages": "75--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Egoitz Laparra and German Rigau. 2012. Exploiting ex- plicit annotations and semantic types for implicit argu- ment resolution. In Proceedings of the Sixth IEEE In- ternational Conference on Semantic Computing (ICSC 2010), pages 75-78, Palermo, Italy, September. IEEE Computer Society.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sources of evidence for implicit argument resolution",
"authors": [
{
"first": "Egoitz",
"middle": [],
"last": "Laparra",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers",
"volume": "",
"issue": "",
"pages": "155--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Egoitz Laparra and German Rigau. 2013. Sources of ev- idence for implicit argument resolution. In Proceed- ings of the 10th International Conference on Compu- tational Semantics (IWCS 2013) -Long Papers, pages 155-166, Potsdam, Germany, March.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Joint entity and event coreference resolution across documents",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500, Jeju Island, Korea, July.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Deterministic coreference resolution based on entitycentric, precision-ranked rules",
"authors": [],
"year": null,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deterministic coreference resolution based on entity- centric, precision-ranked rules. Computational Lin- guistics, 39(4). Accepted for publication.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Creating local coherence: An empirical assessment",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "313--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annie Louis and Ani Nenkova. 2010. Creating local coherence: An empirical assessment. In Human Lan- guage Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 313-316, Los An- geles, California, June.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A multigraph model for coreference resolution",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Martschat",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Broscheit",
"suffix": ""
},
{
"first": "\u00c9va",
"middle": [],
"last": "M\u00fajdricza-Maydt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL -Shared Task",
"volume": "",
"issue": "",
"pages": "100--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Martschat, Jie Cai, Samuel Broscheit,\u00c9va M\u00fajdricza-Maydt, and Michael Strube. 2012. A multigraph model for coreference resolution. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 100-106, Jeju Island, Korea, July.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning to tell tales: A data-driven approach to story generation",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Mcintyre",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil McIntyre and Mirella Lapata. 2009. Learning to tell tales: A data-driven approach to story generation. In Proceedings of the Joint Conference of the 47th",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing",
"authors": [],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "217--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing, Singapore, 2-7 Au- gust 2009, pages 217-225.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "NomBank v1.0. Linguistic Data Consortium",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "Reeves",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Macleod",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Meyers, Ruth Reeves, and Catherine Macleod. 2008. NomBank v1.0. Linguistic Data Consortium, Philadelphia.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Predicate-specific annotations for implicit role binding: Corpus annotation, data analysis and evaluation experiments",
"authors": [
{
"first": "Tatjana",
"middle": [],
"last": "Moor",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Short Papers",
"volume": "",
"issue": "",
"pages": "369--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatjana Moor, Michael Roth, and Anette Frank. 2013. Predicate-specific annotations for implicit role bind- ing: Corpus annotation, data analysis and evaluation experiments. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Short Papers, pages 369-375, Potsdam, Germany, March.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Recovering implicit information",
"authors": [
{
"first": "Martha",
"middle": [
"S"
],
"last": "Palmer",
"suffix": ""
},
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Schiffman",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Marcia",
"middle": [],
"last": "Linebarger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Dowding",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "10--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha S. Palmer, Deborah A. Dahl, Rebecca J. Schiff- man, Lynette Hirschman, Marcia Linebarger, and John Dowding. 1986. Recovering implicit information. In Proceedings of the 24th Annual Meeting of the Associ- ation for Computational Linguistics, New York, N.Y., 10-13 June 1986, pages 10-19.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Nianwen Xue. 2010. Synthesis Lectures on Human Language Technolo- gies. Morgan & Claypool.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "English Gigaword Fifth Edition. Linguistic Data Consortium",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Parker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Jumbo",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Parker, David Graff, Jumbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English Gigaword Fifth Edi- tion. Linguistic Data Consortium, Philadelphia.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Revisiting readability: A unified framework for predicting text quality",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "186--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler and Ani Nenkova. 2008. Revisiting read- ability: A unified framework for predicting text qual- ity. In Proceedings of the 2008 Conference on Empir- ical Methods in Natural Language Processing, pages 186-195, Honolulu, Hawaii, October.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Aligning predicate argument structures in monolingual comparable texts: A new corpus for a new task",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "218--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Anette Frank. 2012a. Aligning pred- icate argument structures in monolingual comparable texts: A new corpus for a new task. In Proceedings of the First Joint Conference on Lexical and Computa- tional Semantics, pages 218-227, Montreal, Canada, June.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Aligning predicates across monolingual comparable texts using graph-based clustering",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "171--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Anette Frank. 2012b. Aligning predicates across monolingual comparable texts us- ing graph-based clustering. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natu- ral Language Processing and Computational Natural Language Learning, pages 171-182, Jeju Island, Ko- rea, July.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "SemEval-2010 Task 10: Linking Events and Their Participants in Discourse",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2010. SemEval- 2010 Task 10: Linking Events and Their Participants in Discourse. In Proceedings of the 5th International Workshop on Semantic Evaluations, pages 45-50, Up- psala, Sweden, July.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Towards a Theory of Anaphoric Processing",
"authors": [
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Hankamer",
"suffix": ""
}
],
"year": 1984,
"venue": "Linguistics and Philosophy",
"volume": "7",
"issue": "",
"pages": "325--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan A. Sag and Jorge Hankamer. 1984. Towards a The- ory of Anaphoric Processing. Linguistics and Philos- ophy, 7:325-345.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Towards a computational theory of definite anaphora comprehension in English",
"authors": [
{
"first": "Candace",
"middle": [
"L"
],
"last": "Sidner",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Candace L. Sidner. 1979. Towards a computational the- ory of definite anaphora comprehension in English. Technical Report AI-Memo 537, Massachusetts Insti- tute of Technology, AI Lab, Cambridge, Mass.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Casting implicit role linking as an anaphora resolution task",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012)",
"volume": "",
"issue": "",
"pages": "7--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Anette Frank. 2012. Casting implicit role linking as an anaphora resolution task. In Pro- ceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012), pages 1-10, Montr\u00e9al, Canada, 7-8 June.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Comprehension of Deep and Surface Verbphrase Anaphors",
"authors": [
{
"first": "Michael",
"middle": [
"K"
],
"last": "Tanenhaus",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"N"
],
"last": "Carlson",
"suffix": ""
}
],
"year": 1990,
"venue": "Language and Cognitive Processes",
"volume": "5",
"issue": "4",
"pages": "257--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael K. Tanenhaus and Greg N. Carlson. 1990. Com- prehension of Deep and Surface Verbphrase Anaphors. Language and Cognitive Processes, 5(4):257-280.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "VENSES++: Adapting a deep semantic processing system to the identification of null instantiations",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Rodolfo",
"middle": [],
"last": "Delmonte",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "296--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Tonelli and Rodolfo Delmonte. 2010. VENSES++: Adapting a deep semantic processing system to the identification of null instantiations. In Proceedings of the 5th International Workshop on Semantic Evalua- tion, pages 296-299, Uppsala, Sweden, July.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Desperately seeking implicit arguments in text",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Rodolfo",
"middle": [],
"last": "Delmonte",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the ACL 2011 Workshop on Relational Models of Semantics",
"volume": "",
"issue": "",
"pages": "54--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Tonelli and Rodolfo Delmonte. 2011. Desperately seeking implicit arguments in text. In Proceedings of the ACL 2011 Workshop on Relational Models of Se- mantics, pages 54-62, Portland, Oregon, USA, June.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Data Mining: Practical Machine Learning Tools and Techniques",
"authors": [
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian H. Witten and Eibe Frank. 2005. Data Mining: Prac- tical Machine Learning Tools and Techniques. Mor- gan Kaufmann, San Francisco, California, USA, 2nd edition.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "More accurate tests for the statistical significance of result differences",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Yeh",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "947--953",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Yeh. 2000. More accurate tests for the sta- tistical significance of result differences. In Proceed- ings of the 18th International Conference on Computa- tional Linguistics, pages 947-953, Saarbr\u00fccken, Ger- many, August.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Illustration of the induction approach: texts consist of PAS (represented by overlapping circles)",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Texts as displayed to the annotators.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "? The remaining contraband was picked up at Le Havre. The containers had arrived [in Le Havre] from China. (5) ? Lt.-Gen. Mohamed Lamari (. . . ) denied his country wanted South African weapons to fight Muslim rebels fighting the government. \"We are not going to fight a flea with",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td>Sentence that comprises a PAS with an (correctly predicted) implicit argument</td><td>induced antecedent</td></tr><tr><td>The [\u2205</td><td/></tr><tr><td/><td>). 1 This data</td></tr></table>",
"type_str": "table",
"text": "A0 ] [operating A3 ] loss, as measured by . . . widened to 189 million euros . . . T-Online['s] It was handed over to Mozambican control . . . 33 years after [\u2205 A0 ] independence. Mozambique['s] . . . [local officials A0 ] failed to immediately report [the accident A1 ] [\u2205 A2 ] . . . [to] the government"
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Three positive examples of automatically induced implicit argument and antecedent pairs."
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Sentence that comprises a PAS with an (incorrectly predicted) implicit argument induced antecedent (1) .. [Statistics * ] released [Tuesday T M P ] [\u2205 A0 ] showed the death toll dropped . . . official statistics (2) A [French LOC * ] [\u2205 A0 ] draft resolution . . . demands full . . . compliance . . . France (3) An earthquake . . . is capable of causing .. [heavy EXT ] damage [\u2205 A2 * ] major"
},
"TABREF4": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF6": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "[The Dalai Lama's A0 ] visit coincides with the Beijing Olympics. The Dalai Lama's A0 ] visit [to France A1 ] coincides with the Beijing Olympics."
},
"TABREF8": {
"num": null,
"html": null,
"content": "<table><tr><td>: Results in terms of precision (P), recall (R) and</td></tr><tr><td>F 1 score for correctly predicting argument realization; re-</td></tr><tr><td>sults that significantly differ from our (full) model are</td></tr><tr><td>marked with asterisks (* p&lt;0.1; ** p&lt;0.01)</td></tr></table>",
"type_str": "table",
"text": ""
}
}
}
}