{
"paper_id": "P12-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:27:31.164065Z"
},
"title": "Extracting Narrative Timelines as Temporal Dependency Structures",
"authors": [
{
"first": "Oleksandr",
"middle": [],
"last": "Kolomiyets",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KU Leuven",
"location": {
"addrLine": "Celestijnenlaan 200A",
"postCode": "B-3001",
"settlement": "Heverlee",
"country": "Belgium"
}
},
"email": "[email protected]"
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {
"addrLine": "Campus",
"postBox": "Box 594",
"postCode": "80309",
"settlement": "Boulder",
"region": "CO",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KU Leuven",
"location": {
"addrLine": "Celestijnenlaan 200A",
"postCode": "B-3001",
"settlement": "Heverlee",
"country": "Belgium"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a new approach to characterizing the timeline of a text: temporal dependency structures, where all the events of a narrative are linked via partial ordering relations like BEFORE, AFTER, OVERLAP and IDENTITY. We annotate a corpus of children's stories with temporal dependency trees, achieving agreement (Krippendorff's Alpha) of 0.856 on the event words, 0.822 on the links between events, and 0.700 on the ordering relation labels. We compare two parsing models for temporal dependency structures, and show that a deterministic non-projective dependency parser outperforms a graph-based maximum spanning tree parser, achieving labeled attachment accuracy of 0.647 and labeled tree edit distance of 0.596. Our analysis of the dependency parser errors gives some insights into future research directions.",
"pdf_parse": {
"paper_id": "P12-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a new approach to characterizing the timeline of a text: temporal dependency structures, where all the events of a narrative are linked via partial ordering relations like BEFORE, AFTER, OVERLAP and IDENTITY. We annotate a corpus of children's stories with temporal dependency trees, achieving agreement (Krippendorff's Alpha) of 0.856 on the event words, 0.822 on the links between events, and 0.700 on the ordering relation labels. We compare two parsing models for temporal dependency structures, and show that a deterministic non-projective dependency parser outperforms a graph-based maximum spanning tree parser, achieving labeled attachment accuracy of 0.647 and labeled tree edit distance of 0.596. Our analysis of the dependency parser errors gives some insights into future research directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There has been much recent interest in identifying events, times and their relations along the timeline, from event and time ordering problems in the TempEval shared tasks (Verhagen et al., 2007; Verhagen et al., 2010) , to identifying time arguments of event structures in the Automated Content Extraction program (Linguistic Data Consortium, 2005; Gupta and Ji, 2009) , to timestamping event intervals in the Knowledge Base Population shared task (Artiles et al., 2011; Amig\u00f3 et al., 2011) .",
"cite_spans": [
{
"start": 172,
"end": 195,
"text": "(Verhagen et al., 2007;",
"ref_id": null
},
{
"start": 196,
"end": 218,
"text": "Verhagen et al., 2010)",
"ref_id": null
},
{
"start": 315,
"end": 349,
"text": "(Linguistic Data Consortium, 2005;",
"ref_id": null
},
{
"start": 350,
"end": 369,
"text": "Gupta and Ji, 2009)",
"ref_id": "BIBREF10"
},
{
"start": 449,
"end": 471,
"text": "(Artiles et al., 2011;",
"ref_id": "BIBREF0"
},
{
"start": 472,
"end": 491,
"text": "Amig\u00f3 et al., 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, to date, this research has produced fragmented document timelines, because only specific types of temporal relations in specific contexts have been targeted. For example, the TempEval tasks only looked at relations between events in the same or adjacent sentences (Verhagen et al., 2007; Verhagen et al., 2010) , and the Automated Content Extraction program only looked at time arguments for specific types of events, like being born or transferring money.",
"cite_spans": [
{
"start": 273,
"end": 296,
"text": "(Verhagen et al., 2007;",
"ref_id": null
},
{
"start": 297,
"end": 319,
"text": "Verhagen et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this article, we propose an approach to temporal information extraction that identifies a single connected timeline for a text. The temporal language in a text often fails to specify a total ordering over all the events, so we annotate the timelines as temporal dependency structures, where each event is a node in the dependency tree, and each edge between nodes represents a temporal ordering relation such as BEFORE, AFTER, OVERLAP or IDENTITY. We construct an evaluation corpus by annotating such temporal dependency trees over a set of children's stories. We then demonstrate how to train a timeline extraction system based on dependency parsing techniques instead of the pair-wise classification approaches typical of prior work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this article are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a new approach to characterizing temporal structure via dependency trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We produce an annotated corpus of temporal dependency trees in children's stories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We design a non-projective dependency parser for inferring timelines from text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The following sections first review some relevant prior work, then describe the corpus annotation and the dependency parsing algorithm, and finally present our evaluation results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Much prior work on the annotation of temporal information has constructed corpora with incomplete timelines. The TimeBank (Pustejovsky et al., 2003b; Pustejovsky et al., 2003a ) provided a corpus annotated for all events and times, but temporal relations were only annotated when the relation was judged to be salient by the annotator. In the TempEval competitions (Verhagen et al., 2007; Verhagen et al., 2010) , annotated texts were provided for a few different event and time configurations, for example, an event and a time in the same sentence, or two main-clause events from adjacent sentences. Bethard et al. (2007) proposed to annotate temporal relations one syntactic construction at a time, producing an initial corpus of only verbal events linked to events in subordinated clauses. One notable exception to this pattern of incomplete timelines is the work of Bramsen et al. (2006), where temporal structures were annotated as directed acyclic graphs. However, they worked on a much coarser granularity, annotating not the ordering between individual events, but between multisentence segments of text.",
"cite_spans": [
{
"start": 122,
"end": 149,
"text": "(Pustejovsky et al., 2003b;",
"ref_id": "BIBREF16"
},
{
"start": 150,
"end": 175,
"text": "Pustejovsky et al., 2003a",
"ref_id": "BIBREF16"
},
{
"start": 365,
"end": 388,
"text": "(Verhagen et al., 2007;",
"ref_id": null
},
{
"start": 389,
"end": 411,
"text": "Verhagen et al., 2010)",
"ref_id": null
},
{
"start": 601,
"end": 622,
"text": "Bethard et al. (2007)",
"ref_id": "BIBREF2"
},
{
"start": 870,
"end": 891,
"text": "Bramsen et al. (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In part because of the structure of the available training corpora, most existing temporal information extraction models formulate temporal linking as a pair-wise classification task, where each pair of events and/or times is examined and classified as having a temporal relation or not. Early work on the TimeBank took this approach (Boguraev and Ando, 2005) , classifying relations between all events and times within 64 tokens of each other. Most of the top-performing systems in the TempEval competitions also took this pair-wise classification approach for both event-time and event-event temporal relations (Bethard and Martin, 2007; Cheng et al., 2007; UzZaman and Allen, 2010; Llorens et al., 2010) . Systems have also tried to take advantage of more global information to ensure that the pair-wise classifications satisfy temporal logic transitivity constraints, using frameworks such as integer linear programming and Markov logic networks (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Yoshikawa et al., 2009; UzZaman and Allen, 2010). Yet the basic approach is still centered around pair-wise classifications, not the complete temporal structure of a document.",
"cite_spans": [
{
"start": 334,
"end": 359,
"text": "(Boguraev and Ando, 2005)",
"ref_id": null
},
{
"start": 613,
"end": 639,
"text": "(Bethard and Martin, 2007;",
"ref_id": "BIBREF2"
},
{
"start": 640,
"end": 659,
"text": "Cheng et al., 2007;",
"ref_id": null
},
{
"start": 660,
"end": 684,
"text": "UzZaman and Allen, 2010;",
"ref_id": null
},
{
"start": 685,
"end": 706,
"text": "Llorens et al., 2010)",
"ref_id": null
},
{
"start": 950,
"end": 972,
"text": "(Bramsen et al., 2006;",
"ref_id": "BIBREF3"
},
{
"start": 973,
"end": 1001,
"text": "Chambers and Jurafsky, 2008;",
"ref_id": "BIBREF4"
},
{
"start": 1002,
"end": 1025,
"text": "Yoshikawa et al., 2009;",
"ref_id": "BIBREF16"
},
{
"start": 1026,
"end": 1049,
"text": "UzZaman and Allen, 2010",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work builds upon this prior research, both improving the annotation approach to generate the fully connected timeline of a story, and improving the models for timeline extraction using dependency parsing techniques. We use the annotation scheme introduced in more detail in Bethard et al. (2012) , which proposes to annotate temporal relations as dependency links between head events and dependent events. This annotation scheme addresses the issues of incoherent and incomplete annotations by guaranteeing that all events in a plot are connected along a single timeline. These connected timelines allow us to design new models for timeline extraction in which we jointly infer the temporal structure of the text and the labeled temporal relations. We employ methods from syntactic dependency parsing, adapting them to our task by including features typical of temporal relation labeling models.",
"cite_spans": [
{
"start": 278,
"end": 299,
"text": "Bethard et al. (2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The corpus of stories for children was drawn from the fables collection of (McIntyre and Lapata, 2009) and annotated as described in (Bethard et al., 2012) . In this section we illustrate the main annotation principles for coherent temporal annotation. As an example story, consider:",
"cite_spans": [
{
"start": 133,
"end": 155,
"text": "(Bethard et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Annotation",
"sec_num": "3"
},
{
"text": "Two Travellers were on the road together, when a Bear suddenly appeared on the scene. Before he observed them, one made for a tree at the side of the road, and climbed up into the branches and hid there. The other was not so nimble as his companion; and, as he could not escape, he threw himself on the ground and pretended to be dead. . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Annotation",
"sec_num": "3"
},
{
"text": "[37.txt] Figure 1 shows the temporal dependency structure that we expect our annotators to identify in this story. The annotators were provided with guidelines both for which kinds of words should be identified as events, and for which kinds of events should be linked by temporal relations. For identifying event words, the standard TimeML guidelines for annotating events (Pustejovsky et al., 2003a) were augmented with two additional guidelines: (Figure 1 caption: Edges denote temporal relations signaled by linguistic cues in the text. Temporal relations that can be inferred via transitivity are not shown.)",
"cite_spans": [
{
"start": 374,
"end": 401,
"text": "(Pustejovsky et al., 2003a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Corpus Annotation",
"sec_num": "3"
},
{
"text": "\u2022 Skip negated, modal or hypothetical events (e.g. could not escape, dead in pretended to be dead).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Annotation",
"sec_num": "3"
},
{
"text": "\u2022 For phrasal events, select the single word that best paraphrases the meaning (e.g. in used to snap the event should be snap, in kept perfectly still the event should be still).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Annotation",
"sec_num": "3"
},
{
"text": "For identifying the temporal dependencies (i.e. the ordering relations between event words), the annotators were instructed to link each event in the story to a single nearby event, similar to what has been observed in reading comprehension studies (Johnson-Laird, 1980; Brewer and Lichtenstein, 1982) . When there were several reasonable nearby events to choose from, the annotators were instructed to choose the temporal relation that was easiest to infer from the text (e.g. preferring relations with explicit cue words like before). A set of six temporal relations was used:",
"cite_spans": [
{
"start": 249,
"end": 270,
"text": "(Johnson-Laird, 1980;",
"ref_id": null
},
{
"start": 271,
"end": 301,
"text": "Brewer and Lichtenstein, 1982)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Annotation",
"sec_num": "3"
},
{
"text": "BEFORE, AFTER, INCLUDES, IS-INCLUDED, IDENTITY or OVERLAP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Annotation",
"sec_num": "3"
},
{
"text": "Two annotators annotated temporal dependency structures in the first 100 fables of the McIntyre-Lapata collection and measured inter-annotator agreement by Krippendorff's Alpha for nominal data (Krippendorff, 2004; Hayes and Krippendorff, 2007) . For the resulting annotated corpus annotators achieved Alpha of 0.856 on the event words, 0.822 on the links between events, and 0.700 on the ordering relation labels. Thus, we concluded that the temporal dependency annotation paradigm was reliable, and the resulting corpus of 100 fables could be used to",
"cite_spans": [
{
"start": 194,
"end": 214,
"text": "(Krippendorff, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 215,
"end": 244,
"text": "Hayes and Krippendorff, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Annotation",
"sec_num": "3"
},
{
"text": "We consider two different approaches to learning a temporal dependency parser: a shift-reduce model (Nivre, 2008) and a graph-based model (McDonald et al., 2005) . Both models take as input a sequence of event words and produce as output a tree structure where the events are linked via temporal relations. Formally, a parsing model is a function (W \u2192 \u03a0) where W = w 1 w 2 . . . w n is a sequence of event words, and \u03c0 \u2208 \u03a0 is a dependency tree \u03c0 = (V, E) where:",
"cite_spans": [
{
"start": 100,
"end": 113,
"text": "(Nivre, 2008)",
"ref_id": "BIBREF15"
},
{
"start": 138,
"end": 161,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
{
"text": "\u2022 V = W \u222a {Root}, that is, the vertex set of the graph is the set of words in W plus an artificial root node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
{
"text": "\u2022 E = {(w h , r, w d ) : w h \u2208 V, w d \u2208 V, r \u2208 R = {BEFORE, AFTER, INCLUDES, IS INCLUDED, IDENTITY, OVERLAP}}, that is, in the edge set of the graph, each edge is a link between a dependent word and its head word, labeled with a temporal relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
{
"text": "\u2022 (w h , r, w d ) \u2208 E =\u21d2 w d \u2260 Root, that is, the artificial root node has no head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
{
"text": "\u2022 (w h , r, w d ) \u2208 E =\u21d2 ((w h ', r ', w d ) \u2208 E =\u21d2 w h = w h ' \u2227 r = r '), that is, for every node there is at most one head and one relation label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
{
"text": "\u2022 E contains no (non-empty) subset of arcs (w h , r i , w i ), (w i , r j , w j ), . . . , (w k , r l , w h ), that is, there are no cycles in the graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "4"
},
{
"text": "SHIFT: Move all of L 2 and the head of Q onto L 1 . ([a 1 . . . a i ], [b 1 . . . b j ], [w k w k+1 . . .], E) \u2192 ([a 1 . . . a i b 1 . . . b j w k ], [], [w k+1 . . .], E)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1: Transitions",
"sec_num": null
},
{
"text": "NO-ARC: Move the head of L 1 to the head of L 2 . ([a 1 . . . a i a i+1 ], [b 1 . . . b j ], Q, E) \u2192 ([a 1 . . . a i ], [a i+1 b 1 . . . b j ], Q, E)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1: Transitions",
"sec_num": null
},
{
"text": "LEFT-ARC: Create a relation where the head of L 1 depends on the head of Q. Not applicable if a i+1 is the root or already has a head, or if there is a path connecting w k and a i+1 . ([a 1 . . . a i a i+1 ], [b 1 . . . b j ], [w k . . .], E) \u2192 ([a 1 . . . a i ], [a i+1 b 1 . . . b j ], [w k . . .], E \u222a {(w k , r, a i+1 )})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1: Transitions",
"sec_num": null
},
{
"text": "RIGHT-ARC: Create a relation where the head of Q depends on the head of L 1 . Not applicable if w k is the root or already has a head, or if there is a path connecting w k and a i+1 . ([a 1 . . . a i a i+1 ], [b 1 . . . b j ], [w k . . .], E) \u2192 ([a 1 . . . a i ], [a i+1 b 1 . . . b j ], [w k . . .], E \u222a {(a i+1 , r, w k )})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1: Transitions",
"sec_num": null
},
{
"text": "Shift-reduce dependency parsers start with an input queue of unlinked words, and link them into a tree by repeatedly choosing and performing actions like shifting a node to a stack, or popping two nodes from the stack and linking them. Shift-reduce parsers are typically defined in terms of configurations and a transition system, where the configurations describe the current internal state of the parser, and the transition system describes how to get from one state to another. Formally, a deterministic shift-reduce dependency parser is defined as (C, T, C F , INIT, TREE) where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 C is the set of possible parser configurations c i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 T \u2286 (C \u2192 C) is the set of transitions t i from one configuration c j to another c j+1 allowed by the parser",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 INIT \u2208 (W \u2192 C) is a function from the input words to an initial parser configuration",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 C F \u2286 C is the set of final parser configurations c F where the parser is allowed to terminate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 TREE \u2208 (C F \u2192 \u03a0) is a function that extracts a dependency tree \u03c0 from a final parser state c F . Given this formalism and an oracle o \u2208 (C \u2192 T ), which can choose a transition given the current configuration of the parser, dependency parsing can be accomplished by Algorithm 1. For temporal dependency parsing, we adopt the Covington set of transitions (Covington, 2001) as it allows for parsing the non-projective trees with \"crossing\" edges that occasionally occur in our annotated corpus. Our parser is therefore defined as:",
"cite_spans": [
{
"start": 355,
"end": 372,
"text": "(Covington, 2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "Algorithm 1 Deterministic parsing with an oracle: c \u2190 INIT(W ); while c \u2209 C F do t \u2190 o(c); c \u2190 t(c); end while; return TREE(c). \u2022 c = (L 1 , L 2 , Q, E) is a parser configuration,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "where L 1 and L 2 are lists for temporary storage, Q is the queue of input words, and E is the set of identified edges of the dependency tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 T = {SHIFT,NO-ARC,LEFT-ARC,RIGHT-ARC} is the set of transitions described in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 INIT(W ) = ([Root], [], [w 1 , w 2 , . . . , w n ], \u2205) puts all input words on the queue and the artificial root on L 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 C F = {(L 1 , L 2 , Q, E) \u2208 C : L 1 = {W \u222a {Root}}, L 2 = Q = \u2205} accepts final states where the input words have been moved off of the queue and lists and into the edges in E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "\u2022 TREE((L 1 , L 2 , Q, E)) = (W \u222a {Root}, E) extracts the final dependency tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "The oracle o is typically defined as a machine learning classifier, which characterizes a parser configuration c in terms of a set of features. For temporal dependency parsing, we learn a Support Vector Machine classifier (Yamada and Matsumoto, 2003) using the features described in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce Parsing Model",
"sec_num": "4.1"
},
{
"text": "One shortcoming of the shift-reduce dependency parsing approach is that each transition decision [Figure 2: A setting for the graph-based parsing model: an initial dense graph G (left) with edge scores SCORE(e). The resulting dependency tree as a spanning tree with the highest score over the edges (right).] made by the model is final, and cannot be revisited to search for more globally optimal trees. Graph-based models are an alternative dependency parsing model, which assembles a graph with weighted edges between all pairs of words, and selects the tree-shaped subset of this graph that gives the highest total score (Fig. 2). Formally, a graph-based parser follows Algorithm 2, where:",
"cite_spans": [
{
"start": 299,
"end": 306,
"text": "(right)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 2",
"ref_id": null
},
{
"start": 624,
"end": 632,
"text": "(Fig. 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph-Based Parsing Model",
"sec_num": "4.2"
},
{
"text": "\u2022 W ' = W \u222a {Root}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Based Parsing Model",
"sec_num": "4.2"
},
{
"text": "\u2022 SCORE \u2208 ((W ' \u00d7 R \u00d7 W ') \u2192 \u211d) is a function for scoring edges",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Based Parsing Model",
"sec_num": "4.2"
},
{
"text": "\u2022 SPANNINGTREE is a function for selecting a subset of edges that is a tree that spans over all the nodes of the graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Based Parsing Model",
"sec_num": "4.2"
},
{
"text": "Algorithm 2 Graph-based dependency parsing: E \u2190 {(e, SCORE(e)) : e \u2208 (W ' \u00d7 R \u00d7 W ')}; G \u2190 (W ', E); return SPANNINGTREE(G)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Based Parsing Model",
"sec_num": "4.2"
},
{
"text": "The SPANNINGTREE function is usually defined using one of the efficient search techniques for finding a maximum spanning tree. For temporal dependency parsing, we use the Chu-Liu-Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1967) which solves this problem by iteratively selecting the edge with the highest weight and removing edges that would create cycles. The result is the globally optimal maximum spanning tree for the graph (Georgiadis, 2003) .",
"cite_spans": [
{
"start": 197,
"end": 216,
"text": "(Chu and Liu, 1965;",
"ref_id": "BIBREF5"
},
{
"start": 217,
"end": 231,
"text": "Edmonds, 1967)",
"ref_id": "BIBREF9"
},
{
"start": 432,
"end": 450,
"text": "(Georgiadis, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Based Parsing Model",
"sec_num": "4.2"
},
{
"text": "The SCORE function is typically defined as a machine learning model that scores an edge based on a set of features. For temporal dependency parsing, we learn a model to predict edge scores via the Margin Infused Relaxed Algorithm (MIRA) (Crammer and Singer, 2003; Crammer et al., 2006) using the set of features defined in Section 5.",
"cite_spans": [
{
"start": 237,
"end": 263,
"text": "(Crammer and Singer, 2003;",
"ref_id": "BIBREF7"
},
{
"start": 264,
"end": 285,
"text": "Crammer et al., 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-Based Parsing Model",
"sec_num": "4.2"
},
{
"text": "The proposed parsing algorithms both rely on machine learning methods. The shift-reduce parser (SRP) trains a machine learning classifier as the oracle o \u2208 (C \u2192 T ) to predict a transition t from a parser configuration c = (L 1 , L 2 , Q, E), using node features such as the heads of L 1 , L 2 and Q, and edge features from the already predicted temporal relations in E. The graph-based maximum spanning tree (MST) parser trains a machine learning model to predict SCORE(e) for an edge e = (w i , r j , w k ), using features of the nodes w i and w k . The full set of features proposed for both parsing models, derived from the state-of-the-art systems for temporal relation labeling, is presented in Table 2 . Note that both models share features that look at the nodes, while only the shift-reduce parser has features for previously classified edges.",
"cite_spans": [],
"ref_spans": [
{
"start": 701,
"end": 708,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Feature Design",
"sec_num": "5"
},
{
"text": "Evaluations were performed using 10-fold cross-validation on the fables annotated in Section 3. The corpus contains 100 fables, a total of 14,279 tokens and a total of 1136 annotated temporal relations. As only 40 instances of OVERLAP relations were annotated when neither INCLUDES nor IS INCLUDED label matched, for evaluation purposes all instances of these relations were merged into the temporally coarse OVERLAP relation. Thus, the total number of OVERLAP relations in the corpus grew from 40 to 258 annotations in total. To evaluate the parsing models (SRP and MST) we proposed two baselines. Both are based on the assumption of linear temporal structures of narratives as the temporal ordering process that was evidenced by studies in human text rewriting (Hickmann, 2003) . The proposed baselines are:",
"cite_spans": [
{
"start": 763,
"end": 779,
"text": "(Hickmann, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "6"
},
{
"text": "\u2022 LinearSeq: A model that assumes all events occur in the order they are written, adding links between each pair of adjacent events, and labeling all links with the relation BEFORE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "6"
},
{
"text": "\u2022 ClassifySeq: A model that links each pair of adjacent events, but trains a pair-wise classifier to predict the relation label for each pair. The classifier is a support vector machine trained using the same features as the MST parser. This is an approximation of prior work, where the pairs of events to classify with a temporal relation were given as an input to the system. (Note that Section 6.2 will show that for our corpus, applying the model only to adjacent pairs of events is quite competitive for just getting the basic unlabeled link structure right.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "6"
},
{
"text": "The Shift-Reduce parser (SRP; Section 4.1) and the graph-based, maximum spanning tree parser (MST; Section 4.2) are compared to these baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations",
"sec_num": "6"
},
{
"text": "Model performance was evaluated using standard criteria for dependency parser evaluation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria and Metrics",
"sec_num": "6.1"
},
{
"text": "Unlabeled Attachment Score (UAS) The fraction of events whose head events were correctly predicted. This measures whether the correct pairs of events were linked, but not whether they were linked by the correct relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria and Metrics",
"sec_num": "6.1"
},
{
"text": "Labeled Attachment Score (LAS) The fraction of events whose head events were correctly predicted with the correct relations. This measures both whether the correct pairs of events were linked and whether their temporal ordering is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria and Metrics",
"sec_num": "6.1"
},
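Both attachment scores can be computed directly from head and label assignments. A small sketch, under the assumption that gold and predicted trees are stored as {dependent: (head, label)} dictionaries over the same events (our representation, not the paper's):

```python
def attachment_scores(gold, predicted):
    """Compute UAS and LAS for two dependency trees over the same events.

    UAS: fraction of events whose predicted head matches the gold head.
    LAS: fraction whose predicted head AND relation label both match.
    """
    n = len(gold)
    uas = sum(predicted[e][0] == gold[e][0] for e in gold) / n
    las = sum(predicted[e] == gold[e] for e in gold) / n
    return uas, las

gold = {"saw": ("stood", "OVERLAP"), "hid": ("saw", "BEFORE")}
pred = {"saw": ("stood", "BEFORE"), "hid": ("saw", "BEFORE")}
attachment_scores(gold, pred)  # -> (1.0, 0.5): both heads right, one label wrong
```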
{
"text": "Tree Edit Distance In addition to UAS and LAS, a tree edit distance score has recently been introduced for evaluating dependency structures (Tsarfaty et al., 2011). The tree edit distance score for a tree \u03c0 is based on the following operations \u03bb \u2208 \u039b = {DELETE, INSERT, RELABEL}:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria and Metrics",
"sec_num": "6.1"
},
{
"text": "\u2022 \u03bb = DELETE: delete a non-root node v in \u03c0 with parent u, making the children of v the children of u, inserted in place of v as a subsequence in the left-to-right order of the children of u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria and Metrics",
"sec_num": "6.1"
},
{
"text": "\u2022 \u03bb = INSERT: insert a node v as a child of u in \u03c0, making it the parent of a consecutive subsequence of the children of u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria and Metrics",
"sec_num": "6.1"
},
{
"text": "\u2022 \u03bb = RELABEL: change the label of node v in \u03c0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria and Metrics",
"sec_num": "6.1"
},
{
"text": "Any two trees \u03c0 1 and \u03c0 2 can be transformed into one another by a sequence of edit operations {\u03bb 1 , ..., \u03bb n }. Taking the shortest such sequence, the tree edit distance is calculated as the sum of the edit operation costs divided by the size of the tree (i.e. the number of words in the sentence). For temporal dependency trees, we assume each operation costs 1.0. The final score subtracts the edit distance from 1, so that a perfect tree has a score of 1.0. The labeled tree edit distance score (LTEDS) computes these sequences over the tree with all its labeled temporal relations, while the unlabeled tree edit distance score (UTEDS) treats all edges as if they had the same label. Table 3 shows the results of the evaluation. The unlabeled attachment score for the LinearSeq baseline was 0.830, suggesting that annotators most often linked adjacent events. At the same time, its labeled attachment score was only 0.581, indicating that even fables are not simply linear stories: many relations other than BEFORE occur. The ClassifySeq baseline performs identically to the LinearSeq baseline, which shows that the simple pairwise classifier was unable to learn anything beyond predicting all relations as BEFORE.",
"cite_spans": [],
"ref_spans": [
{
"start": 674,
"end": 681,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation Criteria and Metrics",
"sec_num": "6.1"
},
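Computing the full tree edit distance requires a general algorithm such as Zhang-Shasha, but the score normalization, and the common special case where both trees share the same attachments (so the cheapest edit script is one RELABEL per disagreeing edge), are easy to sketch. The representation and function names here are our own:

```python
def edit_distance_score(total_edit_cost, tree_size):
    """Normalized score: 1 - (edit cost / tree size); 1.0 is a perfect tree."""
    return 1.0 - total_edit_cost / tree_size

def lteds_same_attachments(gold, predicted, tree_size):
    """LTEDS in the special case of identical attachments: the minimal
    edit script is one RELABEL (cost 1.0) per edge whose label differs.

    gold, predicted: {dependent: (head, label)} over the same events.
    """
    relabels = sum(predicted[e][1] != gold[e][1] for e in gold)
    return edit_distance_score(float(relabels), tree_size)

gold = {"saw": ("stood", "OVERLAP"), "hid": ("saw", "BEFORE")}
pred = {"saw": ("stood", "BEFORE"), "hid": ("saw", "BEFORE")}
lteds_same_attachments(gold, pred, tree_size=3)  # one relabel over 3 nodes
```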
{
"text": "In terms of labeled attachment score, both dependency parsing models outperformed the baseline models: the maximum spanning tree parser achieved 0.614 LAS, and the shift-reduce parser achieved 0.647 LAS. The shift-reduce parser also outperformed the baseline models in terms of labeled tree edit distance, achieving 0.596 LTEDS vs. the baseline's 0.549 LTEDS. These results indicate that dependency parsing models are a good fit for our whole-story timeline extraction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "Finally, in comparing the two dependency parsing models, we observe that the shift-reduce parser outperforms the maximum spanning tree parser in terms of labeled attachment score (0.647 vs. 0.614). It has been argued that graph-based models like the maximum spanning tree parser should produce more globally consistent and correct dependency trees, yet we do not observe that here. A likely explanation is that the shift-reduce parsing model allows features describing previous parse decisions (mirroring the incremental nature of human parse decisions), while the joint inference of the maximum spanning tree parser does not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "To better understand the errors our model is still making, we examined two folds (20% of the evaluation data, 55 errors in total) and identified the major categories of errors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.3"
},
{
"text": "\u2022 OVERLAP \u2192 BEFORE:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.3"
},
{
"text": "The model predicts the correct head, but predicts its label as BEFORE, while the correct label is OVERLAP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.3"
},
{
"text": "\u2022 Attach to further head: The model predicts the wrong head, and predicts as the head an event that is further away than the true head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.3"
},
{
"text": "\u2022 Attach to nearer head: The model predicts the wrong head, and predicts as the head an event that is closer than the true head. Table 4 shows the distribution of the errors over these categories. The two most common types of errors, OVERLAP \u2192 BEFORE and Attach to further head, account for 76.4% of all the errors. The most common type of error is predicting a BEFORE relation when the correct answer is an OVERLAP relation. Figure 3 shows an example of such an error, where the model predicts that the Spendthrift stood before he saw, while the annotator indicates that the seeing happened during the time in which he was standing. An analysis of these OVERLAP \u2192 BEFORE errors suggests that they occur in scenarios like this one, where the duration of one event is significantly longer than the duration of another, but there are no direct cues for these duration differences. We also observe these types of errors when one event has many sub-events, and therefore the duration of the main event typically includes the durations of all the sub-events. It might be possible to address these kinds of errors by incorporating automatically extracted event duration information (Pan et al., 2006; Gusev et al., 2011) .",
"cite_spans": [
{
"start": 1176,
"end": 1194,
"text": "(Pan et al., 2006;",
"ref_id": "BIBREF16"
},
{
"start": 1195,
"end": 1214,
"text": "Gusev et al., 2011)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 426,
"end": 434,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.3"
},
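The error categories above can be made operational. A sketch, assuming events are identified by token position so that "further" and "nearer" are measured in text distance (the formulation and names are ours, not the paper's):

```python
def categorize_error(event_idx, gold_head, pred_head, gold_label, pred_label):
    """Assign a parser prediction to one of the error categories.

    Indices are token positions; distance to a candidate head is
    measured in text order. Returns None when the prediction is correct,
    a "GOLD -> PRED" string for pure label errors (e.g. OVERLAP -> BEFORE),
    or an attachment-error category otherwise.
    """
    if pred_head == gold_head:
        if pred_label == gold_label:
            return None
        return f"{gold_label} -> {pred_label}"
    if abs(pred_head - event_idx) > abs(gold_head - event_idx):
        return "attach to further head"
    return "attach to nearer head"

# Event at position 5 whose gold head is the adjacent event at position 4:
categorize_error(5, 4, 4, "OVERLAP", "BEFORE")  # -> "OVERLAP -> BEFORE"
categorize_error(5, 4, 1, "BEFORE", "BEFORE")   # -> "attach to further head"
```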
{
"text": "The second most common error type of the model is the prediction of a head event that is further away than the head identified by the annotators. Figure 4 gives an example of such an error, where the model predicts that the gathering includes the smarting, instead of that the gathering includes the stung. The second error in the figure is also of the same type. In 65% of the cases where this type of error occurs, it occurs after the parser had already made a label classification error such as BEFORE \u2192 OVERLAP. So these errors may be in part due to the sequential nature of shift-reduce parsing, where early errors propagate and cause later errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.3"
},
{
"text": "In this article, we have presented an approach to temporal information extraction that represents the timeline of a story as a temporal dependency tree. We have constructed an evaluation corpus in which such temporal dependencies have been annotated over a set of 100 children's stories. We have introduced two dependency parsing techniques for extracting story timelines and have shown that both outperform a rule-based baseline and a prior-work-inspired pair-wise classification baseline. Comparing the two dependency parsing models, we have found that a shift-reduce parser, which more closely mirrors the incremental processing of our human annotators, outperforms a graph-based maximum spanning tree parser. Our error analysis of the shift-reduce parser revealed that being able to estimate differences in event durations may play a key role in improving parse quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "7"
},
{
"text": "We have focused on children's stories in this study, in part because they typically have simpler temporal structures (though not so simple that our rule-based baseline could parse them accurately). In most of our fables, there were only one or two characters with at most one or two simultaneous sequences of actions. In other domains, the timeline of a text is likely to be more complex. For example, in clinical records, descriptions of patients may jump back and forth between the patient history, the current examination, and procedures that have not yet happened.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "7"
},
{
"text": "In future work, we plan to investigate how to best apply the dependency structure approach to such domains. One approach might be to first group events into their narrative containers (Pustejovsky and Stubbs, 2011), for example, grouping together all events linked to the time of a patient's examination. Then within each narrative container, our dependency parsing approach could be applied. Another approach might be to join the individual timeline trees into a document-wide tree via discourse relations or relations to the document creation time. Work on how humans incrementally process such timelines in text may help to decide which of these approaches holds the most promise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "7"
},
{
"text": "Data available at http://homepages.inf.ed.ac.uk/s0233364/McIntyreLapata09/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available from http://www.bethard.info/data/fables-100-temporal-dependency.xml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their constructive comments. This research was partially funded by the TERENCE project (EU FP7-257410) and the PARIS project (IWT SBO 110067).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An evaluation framework for aggregated temporal information extraction",
"authors": [
{
"first": "",
"middle": [],
"last": "Amig\u00f3",
"suffix": ""
}
],
"year": 2011,
"venue": "SIGIR-2011 Workshop on Entity-Oriented Search",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amig\u00f3 et al.2011] Enrique Amig\u00f3, Javier Artiles, Qi Li, and Heng Ji. 2011. An evaluation framework for aggre- gated temporal information extraction. In SIGIR-2011 Workshop on Entity-Oriented Search. [Artiles et al.2011] Javier Artiles, Qi Li, Taylor Cas- sidy, Suzanne Tamang, and Heng Ji. 2011.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "-KBP2011 temporal slot filling system description",
"authors": [
{
"first": "",
"middle": [],
"last": "Cuny Blender",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tac",
"suffix": ""
}
],
"year": null,
"venue": "Text Analytics Conference (TAC2011)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CUNY BLENDER TAC-KBP2011 temporal slot fill- ing system description. In Text Analytics Conference (TAC2011).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Finding temporal structure in text: Machine learning of syntactic temporal relations",
"authors": [
{
"first": "Martin2007] Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "1",
"issue": "",
"pages": "441--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Martin2007] Steven Bethard and James H. Martin. 2007. CU-TMP: Temporal relation classifica- tion using syntactic and semantic features. In Proceed- ings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 129-132, Prague, Czech Republic, June. ACL. [Bethard et al.2007] Steven Bethard, James H. Martin, and Sara Klingenstein. 2007. Finding temporal structure in text: Machine learning of syntactic temporal relations. International Journal of Semantic Computing (IJSC), 1(4):441-458, 12. [Bethard et al.2012] Steven Bethard, Oleksandr Kolomiyets, and Marie-Francine Moens. 2012.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Annotating narrative timelines as temporal dependency structures",
"authors": [
{
"first": "",
"middle": [],
"last": "Istanbul",
"suffix": ""
},
{
"first": "May",
"middle": [],
"last": "Turkey",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Elra. ; P. Bramsen",
"suffix": ""
},
{
"first": "Y",
"middle": [
"K"
],
"last": "Deshpande",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 1982,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "6",
"issue": "",
"pages": "473--486",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annotating narrative timelines as temporal dependency structures. In Proceedings of the International Conference on Linguistic Resources and Evaluation, Istanbul, Turkey, May. ELRA. [Boguraev and Ando2005] Branimir Boguraev and Rie Kubota Ando. 2005. TimeBank-driven TimeML analysis. In Annotating, Extracting and Reasoning about Time and Events. Springer. [Bramsen et al.2006] P. Bramsen, P. Deshpande, Y.K. Lee, and R. Barzilay. 2006. Inducing temporal graphs. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 189- 198. ACL. [Brewer and Lichtenstein1982] William F. Brewer and Ed- ward H. Lichtenstein. 1982. Stories are to entertain: A structural-affect theory of stories. Journal of Pragmat- ics, 6(5-6):473 -486.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Japan: Temporal relation identification using dependency parsed tree",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "245--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Chambers and Jurafsky2008] N. Chambers and D. Juraf- sky. 2008. Jointly combining implicit constraints im- proves temporal ordering. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 698-706. ACL. [Cheng et al.2007] Yuchang Cheng, Masayuki Asahara, and Yuji Matsumoto. 2007. NAIST.Japan: Tempo- ral relation identification using dependency parsed tree. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 245-248, Prague, Czech Republic, June. ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the shortest arborescence of a directed graph",
"authors": [
{
"first": "]",
"middle": [
"Y J"
],
"last": "Liu1965",
"suffix": ""
},
{
"first": "T",
"middle": [
"H"
],
"last": "Chu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 1965,
"venue": "Science Sinica",
"volume": "",
"issue": "",
"pages": "1396--1400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Liu1965] Y. J. Chu and T.H. Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, pages 1396-1400.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A fundamental algorithm for dependency parsing",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Covington",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual ACM Southeast Conference",
"volume": "",
"issue": "",
"pages": "95--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.A. Covington. 2001. A fundamental algorithm for dependency parsing. In Proceedings of the 39th Annual ACM Southeast Conference, pages 95-102.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Ultraconservative online algorithms for multiclass problems",
"authors": [
{
"first": "[",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "]",
"middle": [
"K"
],
"last": "Singer2003",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "951--991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Crammer and Singer2003] K. Crammer and Y. Singer. 2003. Ultraconservative online algorithms for multi- class problems. Journal of Machine Learning Research, 3:951-991.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Online passive-aggressive algorithms",
"authors": [
{
"first": "[",
"middle": [],
"last": "Crammer",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Crammer et al.2006] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. 2006. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Optimum branchings",
"authors": [
{
"first": "J",
"middle": [],
"last": "Edmonds",
"suffix": ""
}
],
"year": 1967,
"venue": "Journal of Research of the National Bureau of Standards",
"volume": "",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Stan- dards, pages 233-240.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Predicting unknown time arguments based on crossevent propagation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Georgiadis",
"suffix": ""
},
{
"first": "Prashant",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2003,
"venue": "Arborescence optimization problems solvable by Edmonds' algorithm. Theoretical Computer Science",
"volume": "301",
"issue": "",
"pages": "145--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Georgiadis. 2003. Arborescence op- timization problems solvable by Edmonds' algorithm. Theoretical Computer Science, 301(1-3):427-437. [Gupta and Ji2009] Prashant Gupta and Heng Ji. 2009. Predicting unknown time arguments based on cross- event propagation. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, ACLShort '09, pages 369-372, Stroudsburg, PA, USA. ACL. [Gusev et al.2011] Andrey Gusev, Nathanael Chambers, Divye Raj Khilnani, Pranav Khaitan, Steven Bethard, and Dan Jurafsky. 2011. Using query patterns to learn the duration of events. In Proceedings of the Interna- tional Conference on Computational Semantics, pages 145-154.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Answering the call for a standard reliability measure for coding data",
"authors": [
{
"first": "Krippendorff2007] A",
"middle": [
"F"
],
"last": "Hayes",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hayes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2007,
"venue": "Communication Methods and Measures",
"volume": "1",
"issue": "1",
"pages": "77--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Hayes and Krippendorff2007] A.F. Hayes and K. Krip- pendorff. 2007. Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1):77-89.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Children's Discourse: Person, Space and Time Across Languages",
"authors": [
{
"first": "Maya",
"middle": [],
"last": "Hickmann",
"suffix": ""
}
],
"year": 2003,
"venue": "Cognitive Science",
"volume": "4",
"issue": "1",
"pages": "71--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maya Hickmann. 2003. Children's Dis- course: Person, Space and Time Across Languages. Cambridge University Press, Cambridge, UK. [Johnson-Laird1980] P.N. Johnson-Laird. 1980. Men- tal models in cognitive science. Cognitive Science, 4(1):71-115.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ACE (Automatic Content Extraction) English annotation guidelines for events version 5",
"authors": [
{
"first": "K",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "284--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Krippendorff. 2004. Content anal- ysis: An introduction to its methodology. Sage Publica- tions, Inc. [Linguistic Data Consortium2005] Linguistic Data Con- sortium. 2005. ACE (Automatic Content Extraction) English annotation guidelines for events version 5.4.3 2005.07.01. [Llorens et al.2010] Hector Llorens, Estela Saquete, and Borja Navarro. 2010. TIPSem (English and Spanish): Evaluating CRFs and semantic roles in TempEval-2. In Proceedings of the 5th International Workshop on Se- mantic Evaluation, pages 284-291, Uppsala, Sweden, July. ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning to tell tales: A data-driven approach to story generation",
"authors": [
{
"first": "[",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "217--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[McDonald et al.2005] R. McDonald, F. Pereira, K. Rib- arov, and J. Haji\u010d. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 523-530. ACL. [McIntyre and Lapata2009] N. McIntyre and M. Lapata. 2009. Learning to tell tales: A data-driven approach to story generation. In Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 217-225. ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Algorithms for deterministic incremental dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "513--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre. 2008. Algorithms for determinis- tic incremental dependency parsing. Computational Linguistics, 34(4):513-553.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning event durations from event descriptions",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Rutu",
"middle": [],
"last": "Mulkar",
"suffix": ""
},
{
"first": "Jerry",
"middle": [
"R ; J"
],
"last": "Hobbs",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Stubbs ; James Pustejovsky",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Casta\u00f1o",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Saur\u00fd",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Graham Katz ; James",
"middle": [],
"last": "Setzer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Saur\u00fd",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Setzer",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sundheim",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "Marcia",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": ";",
"middle": [
"R"
],
"last": "Lazo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "; K",
"middle": [],
"last": "Andersson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yoshikawa",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "13",
"issue": "",
"pages": "405--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Pan et al.2006] Feng Pan, Rutu Mulkar, and Jerry R. Hobbs. 2006. Learning event durations from event descriptions. In Proceedings of the 21st International Conference on Computational Linguistics and 44th An- nual Meeting of the Association for Computational Lin- guistics, pages 393-400, Sydney, Australia, July. ACL. [Pustejovsky and Stubbs2011] J. Pustejovsky and A. Stubbs. 2011. Increasing informativeness in temporal annotation. In Proceedings of the 5th Linguistic Annotation Workshop, pages 152-160. ACL. [Pustejovsky et al.2003a] James Pustejovsky, Jos\u00e9 Casta\u00f1o, Robert Ingria, Roser Saur\u00fd, Robert Gaizauskas, Andrea Setzer, and Graham Katz. 2003a. TimeML: Robust specification of event and temporal expressions in text. In Proceedings of the Fifth International Workshop on Computational Semantics (IWCS-5), Tilburg. [Pustejovsky et al.2003b] James Pustejovsky, Patrick Hanks, Roser Saur\u00fd, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, and Marcia Lazo. 2003b. The TimeBank corpus. In Proceedings of Corpus Linguistics, pages 647-656. [Tsarfaty et al.2011] R. Tsarfaty, J. Nivre, and E. Ander- sson. 2011. Evaluating dependency parsing: Robust and heuristics-free cross-annotation evaluation. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 385-396. ACL. [UzZaman and Allen2010] Naushad UzZaman and James Allen. 2010. TRIPS and TRIOS system for TempEval- 2: Extracting temporal information from text. In Pro- ceedings of the 5th International Workshop on Seman- tic Evaluation, pages 276-283, Uppsala, Sweden, July. ACL. [Verhagen et al.2007] Marc Verhagen, Robert Gaizauskas, Frank Schilder, Graham Katz, and James Pustejovsky. 2007. SemEval2007 Task 15: TempEval temporal rela- tion identification. In SemEval-2007: 4th International Workshop on Semantic Evaluations. 
[Verhagen et al.2010] Marc Verhagen, Roser Saur\u00ed, Tom- maso Caselli, and James Pustejovsky. 2010. SemEval- 2010 Task 13: TempEval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, Se- mEval '10, pages 57-62, Stroudsburg, PA, USA. ACL. [Yamada and Matsumoto2003] H. Yamada and Y. Mat- sumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT. [Yoshikawa et al.2009] K. Yoshikawa, S. Riedel, M. Asa- hara, and Y. Matsumoto. 2009. Jointly identifying temporal relations with Markov Logic. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 405-413. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Event timeline for the story of the Travellers and the Bear. Nodes are events and edges are temporal relations.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "An OVERLAP \u2192 BEFORE parser error. True links are solid lines; the parser error is the dotted line.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Parser errors attaching to further away heads. True links are solid lines; parser errors are dotted lines.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "Transition system for Covington-style shift-reduce dependency parsers.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"text": "Features for the shift-reduce parser (SRP) and the graph-based maximum spanning tree (MST) parser. The \u221a",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF4": {
"type_str": "table",
"text": "Performance levels of temporal structure parsing methods. A * indicates that the model outperforms LinearSeq and ClassifySeq at p < 0.01 and a \u2020 indicates that the model outperforms MST at p < 0.05.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF6": {
"type_str": "table",
"text": "Error distribution from the analysis of 55 errors of the Shift-Reduce parsing model.",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}