{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:27:16.664431Z"
},
"title": "Extending Implicit Discourse Relation Recognition to the PDTB-3",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The PDTB-3 contains many more implicit discourse relations than the previous PDTB-2. This is in part because implicit relations have now been annotated within sentences as well as between them. In addition, some now cooccur with explicit discourse relations, instead of standing on their own. Here we show that while this can complicate the problem of identifying the location of implicit discourse relations, it can in turn simplify the problem of identifying their senses. We present data to support this claim, as well as methods that can serve as a non-trivial baseline for future stateof-the-art recognizers for implicit discourse relations.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The PDTB-3 contains many more implicit discourse relations than the previous PDTB-2. This is in part because implicit relations have now been annotated within sentences as well as between them. In addition, some now cooccur with explicit discourse relations, instead of standing on their own. Here we show that while this can complicate the problem of identifying the location of implicit discourse relations, it can in turn simplify the problem of identifying their senses. We present data to support this claim, as well as methods that can serve as a non-trivial baseline for future stateof-the-art recognizers for implicit discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most readers will be familiar with the PDTB-2 (Prasad et al., 2008) . At the time of its creation, it was the largest public repository of annotated discourse relations (over 43K), including over 18.4K signalled by explicit discourse connectives (coordinating or subordinating conjunctions, or discourse adverbials). In the corpus, discourse relations comprise two arguments labelled Arg1 and Arg2, with each relation anchored by either an explicit discourse connective or adjacency. In the latter case, annotators inserted one or more implicit connectives to signal the sense(s) they inferred to hold between the arguments. The size and availability of the PDTB-2 spawned work on shallow discourse parsing, as in the 2015 and 2016 CoNLL shared tasks (Xue et al., 2015 .",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF15"
},
{
"start": 751,
"end": 768,
"text": "(Xue et al., 2015",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the release of the PDTB-3 1 , there are now \u223c12.5K additional intra-sentential relations annotated (i.e., relations that lie wholly within the projection of a top-level S-node) and \u223c1K additional inter-sentential relations (Webber et al., 2019 Work on shallow discourse parsing (including the CoNLL shared tasks, as well as (Bai and Zhao, 2018; Dai and Huang, 2018; Rutherford et al., 2017; Shi and Demberg, 2017) ) consistently shows that recognizing and sense labelling implicit discourse relations poses more of a challenge than doing so for explicit discourse relations. Hence, implicit relations are the focus of the current work.",
"cite_spans": [
{
"start": 228,
"end": 248,
"text": "(Webber et al., 2019",
"ref_id": null
},
{
"start": 329,
"end": 349,
"text": "(Bai and Zhao, 2018;",
"ref_id": "BIBREF0"
},
{
"start": 350,
"end": 370,
"text": "Dai and Huang, 2018;",
"ref_id": "BIBREF4"
},
{
"start": 371,
"end": 395,
"text": "Rutherford et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 396,
"end": 418,
"text": "Shi and Demberg, 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "But there is another reason as well: Work on the PDTB-2 has assumed (correctly) that non-explicit discourse relations (i.e., implicit relations, AltLex relations (Prasad et al., 2010) and entity relations) only hold between adjacent sentences as they did in the PDTB-2, so that a sentence boundary is the only position that needs to be checked for the presence of a non-explicit relation. The difficult problem lay in assigning sense-labels to implicit relations.",
"cite_spans": [
{
"start": 162,
"end": 183,
"text": "(Prasad et al., 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2, we show that, with the PDTB-3, this is no longer the case because non-explicit relations can hold within sentences as well as between them. This in turn motivates a new approach to handle implicit discourse relations in shallow discourse parsing, involving both finding them as well as identifying their senses (Section 3). After showing that the sense-distribution of implicit relations within sentences differs from that between them (cf. Section 4), we argue that one should be able to take advantage of this fact in sense-labelling these relations. 2 Section 5 describes two different ways of doing so, along with a way of dealing with another difference in sense distribution -that of implicit relations that co-occur with explicit relations and implicit relations that do not. While the particular methods used here for sense-labelling may not advance the state-of-the-art, it is the way we use them that should deliver a new baseline for recognizing a fuller range of implicit relations and contribute to the next generation of shallow discourse parsers. 3 2 Discourse Annotation in Discourse annotation in the PDTB-3 differs from that in the PDTB-2 in two major ways: (1) many more discourse relations are annotated within sentences, and (2) there are changes in the sense hierarchy used in annotating them. While only the first requires changes to shallow discourse parsing, presenting changes to the senses used in annotating relations will allow us to show differences in the distribution of senses associated with different types of implicit discourse relations.",
"cite_spans": [
{
"start": 1076,
"end": 1077,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It was a consequence of the way that the PDTB-2 was annotated, that there were over twice as many discourse relations annotated across sentences than within them. The former were either explicit relations associated with discourse adverbials or sentence-initial coordinating conjunctions 4 , or implicit relations between paragraph-internal adjacent sentences not otherwise linked by a discourse connective. Within sentences, only annotated were explicit relations associated with subordinating conjunctions, sentence-internal coordinating conjunctions, and discourse adverbials (both of whose arguments were in the same sentence). So it should not be surprising that there were many more intersentential relations than intra-sentential relations in the PDTB-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Annotation in PDTB-3",
"sec_num": "2.1"
},
{
"text": "In contrast, of the over 13K additional discourse relations annotated in the PDTB-3, over 95% of them occur within individual sentences. Of the new relations, 5780 are implicit, some standing alone (like the implicit relations between sentences), with others co-occuring with an explicit discourse relation. Within a sentence, implicit relations occur at the boundaries of syntactic forms -for example, at the boundary between a free adjunct and its matrix clause (Ex. 1), or at the boundary between a to-clause and its matrix clause (Ex. 2), or between two punctuation-marked conjuncts (Ex. 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Annotation in PDTB-3",
"sec_num": "2.1"
},
{
"text": "3 It would not make sense to have separate processors for explicit discourse relations, as the decision process takes account of the discourse connective, thereby already learning whether the arguments are likely to occur across vs. within sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Annotation in PDTB-3",
"sec_num": "2.1"
},
{
"text": "4 Despite what people may have been taught, there are over 2100 tokens of sentence-initial \"But\" in the Penn WSJ corpus and over 660 tokens of sentence-initial \"And\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Annotation in PDTB-3",
"sec_num": "2.1"
},
{
"text": "(1) Treasury bonds got off to a strong start, advancing modestly during overnight trading on foreign markets. Conn=specifically (ARG2-AS-DETAIL) [wsj 0351]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Annotation in PDTB-3",
"sec_num": "2.1"
},
{
"text": "(2) After a bad start, Treasury bonds were buoyed by a late burst of buying, to end modestly higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Annotation in PDTB-3",
"sec_num": "2.1"
},
{
"text": "(3) Father McKenna moves through the house praying in Latin, urging the demon to split. (CONJUNCTION) [wsj 0413] Because implicit relations within sentences don't all occur at a single, well-defined position, this adds to the problems of shallow discourse parsing.",
"cite_spans": [
{
"start": 102,
"end": 112,
"text": "[wsj 0413]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conn=therefore (RESULT) [wsj 0400]",
"sec_num": null
},
{
"text": "In addition to stand-alone implicits in the PDTB-3, annotators were allowed to indicate implicit relations that co-occur with explicit relations (Rohde et al., 2017 (Rohde et al., , 2018 , as a way of indicating a relation that did not derive from the explicit connective, but rather from what the annotator inferred from the arguments themselves, as in Ex. 4-6: (4) We've got to get out of the Detroit mentality and Implicit=instead be part of the world mentality, declares Charles M. Jordan, GM's vice president for design . . . In Ex. 4, the annotators indicated that they inferred ARG2-AS-SUBST from the pair of arguments conjoined with and. The annotators took and itself to convey only that its arguments played the same role with respect to the prior text. It is the arguments themselves that led them to conclude that the second conjunct is meant to substitute for the first.",
"cite_spans": [
{
"start": 145,
"end": 164,
"text": "(Rohde et al., 2017",
"ref_id": "BIBREF17"
},
{
"start": 165,
"end": 186,
"text": "(Rohde et al., , 2018",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conn=therefore (RESULT) [wsj 0400]",
"sec_num": null
},
{
"text": "Similarly, in Ex. 5, the annotators indicated that they inferred the temporal relation PRECEDENCE from the pair of arguments conjoined with but. The annotators took but itself to convey CONCESSION. It is the arguments themselves that led the annotators to conclude that the second conjunct follows the first in time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conn=therefore (RESULT) [wsj 0400]",
"sec_num": null
},
{
"text": "Finally, in Ex. 6, the annotators indicated that they inferred a CONCESSION relation from the pair of arguments linked by without. The annotators took without itself (like its positive version with) to convey MANNER. It is only the arguments that led them to conclude that Arg2 denies an expectation raised by Arg1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conn=therefore (RESULT) [wsj 0400]",
"sec_num": null
},
{
"text": "In the PDTB-3, when two relations co-occur, they are explicitly linked through a shared index. The consequence for shallow discourse parsing is that explicit relations now need to be checked for co-occurence with an implicit relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conn=therefore (RESULT) [wsj 0400]",
"sec_num": null
},
{
"text": "The sense hierarchy used in annotating the PDTB-3 differs from that used in annotating the PDTB-2 in three ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changes to the Sense Hierarchy",
"sec_num": "2.2"
},
{
"text": "1. Rare and/or difficult to annotate senses were dropped, as with the different types of conditional senses;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changes to the Sense Hierarchy",
"sec_num": "2.2"
},
{
"text": "2. Sense relations at Level-3 now only encode directionality -for example, distinguishing ARG1-AS-SUBST (Ex. 7) from ARG2-AS-SUBST (Ex. 8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changes to the Sense Hierarchy",
"sec_num": "2.2"
},
{
"text": "3. New senses were added that were found to be needed for annotating relations within sentences. More about the senses used in annotating the PDTB-3 can be found in Webber et al. (2019) . Senses are relevant to this discussion of implicit relations in shallow discourse parsing because (as set out in Section 4) implicit relations have been found to have different sense distributions depending on where they occur.",
"cite_spans": [
{
"start": 165,
"end": 185,
"text": "Webber et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Changes to the Sense Hierarchy",
"sec_num": "2.2"
},
{
"text": "Both the PDTB-2 and PDTB-3 use stand-off annotation. What is relevant with respect to the experiments we report here, is what information is explicit in the annotation, as opposed to having to be computed. This information includes (1) the type of the relation (Explicit, Implicit, AltLex, Al-tLexC, Entity, Hypophora, NoRel); (2) the byte spans of the two arguments of the relation; and (3) the explicit index (aka link) of relations that co-occur by virtue of sharing the same or nearly the same arguments. The full field structure of discourse relations is set out in Section 8 of Webber et al. (2019) . What has to be recovered from the argument spans and the span of the projection of the top node in each sentence-level parse tree is whether a relation occurs wholely within a single sentence or involves multiple sentences.",
"cite_spans": [
{
"start": 584,
"end": 604,
"text": "Webber et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stand-off annotation in the PDTB-3",
"sec_num": "2.3"
},
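{
"text": "As a concrete illustration of this recovery step, the following minimal sketch (hypothetical helper names, not the authors' code; spans are assumed to be (start, end) byte-offset pairs) labels a relation as intra-sentential when both argument spans fall within the span of a single sentence-level parse tree:\n\ndef relation_location(arg1_span, arg2_span, sentence_spans):\n    # sentence_spans would come from the projection of the top node\n    # of each sentence-level parse tree; all spans are byte offsets.\n    lo = min(arg1_span[0], arg2_span[0])\n    hi = max(arg1_span[1], arg2_span[1])\n    for s_start, s_end in sentence_spans:\n        if s_start <= lo and hi <= s_end:\n            return \"intra\"  # both arguments inside one sentence\n    return \"inter\"  # the relation involves multiple sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stand-off annotation in the PDTB-3",
"sec_num": "2.3"
},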
{
"text": "The sense classifiers for implicit relations used in this paper are based on a Basic Model whose properties reflect consideration of data size and the interaction between lexical information and structural information. (A full description of the Basic Model is given in Appendix A.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model Architecture",
"sec_num": "3"
},
{
"text": "The architecture of Basic Model is shown in Figure 1 . It consists of two LSTMs (Hochreiter and Schmidhuber, 1997) and max-pooling layers, a hidden layer, a dense layer, and a softmax layer. Inputs to the model consist of pairs of discourse arguments, each represented as a sequence of word vectors. The output is a probability distribution of the senses between the discourse argument spans. The two sequences of word vectors are encoded by LSTMs in order to capture positional information within the sequential structure. Max-pooling on the output of the LSTMs is used to compose meaning and reduce parameters for the model, as it has been proven effective in Conneau et al. (2017) . Modeling the interaction between discourse arguments follows , who argue that discourse relations can only be determined by jointly analyzing the arguments. In addition, Rutherford et al. (2017) observed the influence of different configurations on the performance of the model for the implicit sense classification task, suggesting an interaction between the lexical information in word vectors and the structural information encoded in the model itself. We follow them in adopting a 300-dimension word2vec (Mikolov et al., 2013b) word embedding and hidden size of 100 for the Basic Model. Table 1 compares the distribution of intersentential and intra-sentential implicit relations with respect to the PDTB-3's Level-2 sense labels, along with the proportion of each label to the total inter-sentential and intra-sentential implicit relations. Besides differences in frequency -for example, relations expressing PURPOSE constitute 21.76% of intra-sentential implicit relations, while only 0.12% of inter-sentential implicits, while relations expressing INSTANTIATION constitute 8.89% of inter-sentential implicits, while only 1.4% of intra-sentential implicits -the senses of inter-sentential implicits are more unequally distributed. That is, three senses -CONTIN-GENCY.CAUSE, EXPANSION.CONJUNCTION and LEVEL-OF-DETAIL cover 67.08% of the intersentential implicits. In contrast, except for CON-TINGENCY.CAUSE and PURPOSE, most of the other intra-sentential implicits are more evenly distributed. As often happens with training on an imbalanced distribution, the unequal distribution of inter-sentential relations can lead the model to predict the majority class, ignoring minority classes.",
"cite_spans": [
{
"start": 80,
"end": 114,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 662,
"end": 683,
"text": "Conneau et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 856,
"end": 880,
"text": "Rutherford et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 1194,
"end": 1217,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 44,
"end": 52,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 1277,
"end": 1284,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Basic Model Architecture",
"sec_num": "3"
},
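{
"text": "To make the architecture concrete, here is a minimal PyTorch sketch of the Basic Model as described above: two LSTM encoders with max-pooling, a tanh interaction layer, a dense layer, and a softmax. The 300-dimension embeddings and hidden size of 100 are taken from the text, and the dense-layer size (hidden_size // 5) from the appendix footnote; the class name and the number of sense labels are illustrative assumptions, not the authors' code.\n\nimport torch\nimport torch.nn as nn\n\nclass BasicModel(nn.Module):\n    def __init__(self, emb_dim=300, hidden=100, n_senses=14):\n        super().__init__()\n        self.lstm1 = nn.LSTM(emb_dim, hidden, batch_first=True)\n        self.lstm2 = nn.LSTM(emb_dim, hidden, batch_first=True)\n        self.w1 = nn.Linear(hidden, hidden, bias=False)  # W_1\n        self.w2 = nn.Linear(hidden, hidden)  # W_2 plus b_hid\n        self.dense = nn.Linear(hidden, hidden // 5)\n        self.out = nn.Linear(hidden // 5, n_senses)\n\n    def forward(self, arg1, arg2):  # (batch, seq_len, emb_dim)\n        h1, _ = self.lstm1(arg1)\n        h2, _ = self.lstm2(arg2)\n        a1 = h1.max(dim=1).values  # max-pooling over the sequence\n        a2 = h2.max(dim=1).values\n        hid = torch.tanh(self.w1(a1) + self.w2(a2))\n        dense = torch.tanh(self.dense(hid))\n        return torch.log_softmax(self.out(dense), dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model Architecture",
"sec_num": "3"
},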
{
"text": "As for the 1753 implicits that co-occur with explicit relations, Table 2 shows that their sense distribution differs sharply from that of stand-alone implicit relations. For example, over 70% convey either CAUSE or ASYNCHRONOUS, while this holds of only 28.7% of stand-alone implicit relations. As such, linked implicits should be more predictable than stand-alone implicit relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Basic Model Architecture",
"sec_num": "3"
},
{
"text": "Differences in the distribution of implicit relations within sentences and across sentences suggest that we exploit this difference in sense-labelling implicit relations. In this section, we first assume that we know where implicit relations are located within a sentence, so that we can simply consider their arguments. We then present work we have done towards relaxing this assumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-and intra-sentential Implicits",
"sec_num": "5"
},
{
"text": "Task 1: Consider the location of implicit relations in classification. There are different ways to take the location of implicit relations into consideration. Here we present two models, Model 1 (Section 5.2) and Model 2 (Section 5.3), both based on the basic model architecture described in Section 3. We compare them with the Basic Model, which uses the same classifier on all tokens. We compare their performance not just using the standard training-development-test split, where the ratio of inter-to intra-sentential implicits in the training set, WSJ section 2-21, is 12787:5014. In addition, we follow Shi and Demberg (2017) , who argue that evaluation through cross-validation is more predictive, given the wide variation in texts that appear in different sections of the Penn Wall Street Journal corpus. The average ratio of interto intra-sentential implicits in training sets of crossvalidation is 12747:4992. The scores of 3 models are weighted by the proportion of inter-and intrasentential tokens in the test set. Table 1 : Distribution of inter-sentential/intra-sentential implicit relations among Level 2 labels and the proportion of each label with respect to inter-sentential/intra-sentential implicit relations lations hold within sentences, two recognizers to identify implicit relations and find argument spans are provided. The first recognizer (Section 5.4) takes syntactic features to identify sentences that contain intra-sentential relations. The second recognizer (Section 5.5) exploits the properties that some explicit relations are linked with implicit relations, checking the explicit relations for cooccurrence with implicit relations to obtain the shared arguments.",
"cite_spans": [
{
"start": 609,
"end": 631,
"text": "Shi and Demberg (2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 1027,
"end": 1034,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inter-and intra-sentential Implicits",
"sec_num": "5"
},
{
"text": "The Basic Model uses the same classifier on all tokens. Since we know which tokens are intersentential and which are intra-sentential, we can compare how well the Basic Model does on each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "5.1"
},
{
"text": "To compute the F 1 scores for the overall performance of the model, the scores of the model are combined, weighted by the proportion of inter-or intra-sentential tokens in the test set. This is shown on the first line of Table 3 , elaborated in the confusion matrix shown in Figure 2 . A Chi-squared test on the results show the performance of the Basic Model appears to depend to a statistically significant extent on whether the sense appears inter-or intra-sententially (p=1.50e-03).",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 275,
"end": 283,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "5.1"
},
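{
"text": "The score combination is simple arithmetic; the sketch below shows it together with a Chi-squared test of the kind reported (the 2x2 correct/incorrect contingency layout and the numbers are our illustrative assumptions, not the paper's):\n\nfrom scipy.stats import chi2_contingency\n\ndef weighted_f1(f1_inter, f1_intra, n_inter, n_intra):\n    # Combine per-location F1 scores, weighted by the proportion\n    # of inter- and intra-sentential tokens in the test set.\n    return (f1_inter * n_inter + f1_intra * n_intra) / (n_inter + n_intra)\n\n# Illustrative counts: correct vs. incorrect predictions, split by\n# whether the relation is inter- or intra-sentential.\ntable = [[300, 200],\n         [150, 180]]\nchi2, p, dof, expected = chi2_contingency(table)\nprint(weighted_f1(0.60, 0.45, 500, 330), p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "5.1"
},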
{
"text": "Model architecture: The idea behind Model 1 is to separate the classification task into intrasentential and inter-sentential implicit sense clas-sification, with separate classifiers for each. The model architecture and configuration of each classifier are the same as in the Basic Model (Section 3). We expect each classifier to capture different sense distributions of intra-sentential or inter-sentential implicits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "5.2"
},
{
"text": "Training and evaluation: Based on their argument spans and the spans associated with each sentence in a file, tokens can be labeled as intersentential or intra-sentential. For the standard training-development-test framework, the tokens are allocated into separate inter-sentential/intrasentential training, development, and test sets. The inter-sentential training set is used in training the inter-sentential implicit sense classifier, and similarly for intra-sentential classification. Test set tokens labeled as inter-sentential or intra-sentential are fed into the appropriate classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "5.2"
},
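{
"text": "A minimal sketch of this routing, reusing the hypothetical BasicModel sketch from Section 3 (the token attributes are illustrative stand-ins for however the data is represented):\n\n# Two independent classifiers with the Basic Model architecture,\n# each trained only on its own subset of the training data.\ninter_clf = BasicModel()  # trained on inter-sentential tokens\nintra_clf = BasicModel()  # trained on intra-sentential tokens\n\ndef predict_sense(token):\n    # token.location comes from the span comparison in Section 2.3.\n    clf = intra_clf if token.location == \"intra\" else inter_clf\n    return clf(token.arg1_vectors, token.arg2_vectors)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "5.2"
},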
{
"text": "Results: The second line of Table 3 presents F 1 scores for Model 1 evaluated on the main evaluation test set and by cross-validation. It shows that Model 1 improves on the Basic Model in predicting intra-sentential implicit relations. The performance of the model significantly depends on the location of relations (p = 2.41e-09). The confusion matrix for Model 1 5 (cf. Figure 2) shows that labels with a relatively larger sample size in each set are predicted more often, includ- ing CONTINGENCY.PURPOSE (frequent in intrasentential implicits), EXPANSION.CONJUNCTION (frequent in inter-sentential implicits) and CONTIN-GENCY.CAUSE (frequent in both). The confusion matrix also shows that less frequent senses are confused with these frequent labels more often. Model 1 also reduces the ignorance problem of the Basic Model, in that it correctly classifies some samples into TEMPORAL.SYNCHRONOUS, which is a label ignored by the basic model.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 372,
"end": 381,
"text": "Figure 2)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Model 1",
"sec_num": "5.2"
},
{
"text": "Model architecture: Model 2 treats being intersentential or intra-sentential as a single binary feature. Model 2 is created by modifying the Basic Model to include this feature after obtaining the combined representations of the two arguments. We concatenate the binary feature f S with the output of the dense layer before applying the softmax function, expecting it to affect the final prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 2",
"sec_num": "5.3"
},
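{
"text": "A sketch of this modification, extending the hypothetical BasicModel sketch from Section 3: the binary feature f_S is concatenated with the dense-layer output, so the softmax layer sees one extra dimension. The details beyond what the text states are our assumptions.\n\nclass Model2(BasicModel):\n    def __init__(self, emb_dim=300, hidden=100, n_senses=14):\n        super().__init__(emb_dim, hidden, n_senses)\n        # One extra input dimension for the binary location feature.\n        self.out = nn.Linear(hidden // 5 + 1, n_senses)\n\n    def forward(self, arg1, arg2, f_s):  # f_s: (batch, 1), 0. or 1.\n        h1, _ = self.lstm1(arg1)\n        h2, _ = self.lstm2(arg2)\n        a1, a2 = h1.max(dim=1).values, h2.max(dim=1).values\n        hid = torch.tanh(self.w1(a1) + self.w2(a2))\n        dense = torch.tanh(self.dense(hid))\n        dense = torch.cat([dense, f_s], dim=-1)  # append f_S\n        return torch.log_softmax(self.out(dense), dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 2",
"sec_num": "5.3"
},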
{
"text": "Training and evaluation: The data selection follows the standard and cross-validation data split process. The evaluation assumes that each token in the test set has been given an inter-sentential or intra-sentential feature. The scores are computed following the general process as the basic model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 2",
"sec_num": "5.3"
},
{
"text": "The third line of Table 3 shows that Model 2 improves over the Basic Model with respect to both inter-and intra-sentential implicit sense prediction, though the performance of the model still has a statistically significant dependence on the location of relations (p = 4.53e-04). The improvement of Model 2 on intra-sentential labels is not as dramatic as Model 1. Compared to the previous model, Model 2 doesn't sharpen its focus on those frequent labels in inter-or intra-sentential sets. Instead, the integrated feature in the representations distributes the benefits on the prediction ability of different labels more evenly. In addition, the confusion matrix in Figure 2 shows that Model 2 reduces the confusion between INSTANTIATION and LEVEL-OF-DETAIL, which Scholman and Demberg (2017) have hightlighted as a common source of confusion. The confusion matrix for Model 2 also shows some attention to less frequent labels such as COMPARISON.CONTRAST, which are not predicted in either the Basic Model or Model 1. ",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 667,
"end": 675,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results:",
"sec_num": null
},
{
"text": "The results presented above reflect \"gold knowledge\" of where implicit discourse relations hold within sentences. But in truth, their locations need to be identified before (or jointly with) labelling their senses. We have viewed this as a two-step process: Recognizing sentences that contain at least one implicit intra-sentential relation, and then recognizing the arguments to each relation. The first step has been implemented using a recognizer that takes a linearized parse tree of a sentences as the input. The second step is future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},
{
"text": "Model architecture: Similar to the Basic Model, inputs are represented as a sequence of word vectors, and word embeddings are initialized using pretrained fastText (Bojanowski et al., 2017) vectors (16B tokens). These vectors are fed to a BiLSTM whose outputs are then fed to a linear layer to produce a binary label, indicating the existence of at least one implicit intra-sentential relation. Word embeddings are set to 200, hidden dimensions, to 256, and vocabulary size, to 25k.",
"cite_spans": [
{
"start": 164,
"end": 189,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},
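{
"text": "A minimal PyTorch sketch of this recognizer under the stated dimensions (random embedding initialization stands in for the fastText vectors; the class name and the use of the final hidden state are our assumptions):\n\nclass IntraSentenceRecognizer(nn.Module):\n    # BiLSTM over a linearized parse tree, followed by a linear\n    # layer producing a binary label (implicit relation present or not).\n    def __init__(self, vocab_size=25000, emb_dim=200, hidden=256):\n        super().__init__()\n        self.emb = nn.Embedding(vocab_size, emb_dim)\n        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,\n                              bidirectional=True)\n        self.linear = nn.Linear(2 * hidden, 2)\n\n    def forward(self, tree_ids):  # (batch, seq_len) token ids\n        h, _ = self.bilstm(self.emb(tree_ids))\n        return self.linear(h[:, -1, :])  # binary logits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},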
{
"text": "Training and evaluation: To train our recognizer, we first created a dataset of triplets comprising a sentence from PDTB-3, its corresponding parse tree, and a binary label. We obtain the parse trees from the Penn TreeBank (PTB - Marcus et al. 1993 ) and set the binary label to 1 if there exist at least one implicit or AltLex relation in that sentence. For example, the sentence in Ex. 9 is labelled 1, while that in Ex. 10 is labelled 0. Intra-sentential AltLex relations are included here because they are simply Implicit relations whose alternative lexicalization reliably signals its sensefor example, the phrases \"resulting in\", \"avoiding\", and \"contributing to\" are all taken to be alternative lexicalizations that reliably signal RESULT. This is not true of the earlier Examples 1-3, which are classed as Implicits. On the other hand, we do not label \"linked\" implicit relations as 1 because the visible evidence is an explicit connective signalling an explicit relation, and we don't want that to be taken per se as evidence for an implicit relation. For recognizing linked implicits, we have built a separate model which will be discussed in Section 5.5. Our training used the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-4. We randomly split the dataset into training (60%), development (20%) and test (20%). To understand what happens if \"gold parse trees\" are not used, we also created variants of the dataset using parse trees from the widely used Berkeley parser (Kitaev and Klein, 2018) and Stanford parser (Manning et al., 2014) .",
"cite_spans": [
{
"start": 230,
"end": 248,
"text": "Marcus et al. 1993",
"ref_id": "BIBREF11"
},
{
"start": 1498,
"end": 1522,
"text": "(Kitaev and Klein, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 1527,
"end": 1565,
"text": "Stanford parser (Manning et al., 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},
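{
"text": "A sketch of the dataset construction just described (the sentence and relation attributes are hypothetical stand-ins for the PDTB-3 and PTB interfaces, not a real API):\n\nimport random\n\ndef build_dataset(sentences, relations, seed=0):\n    # Triplets of (sentence, linearized parse tree, binary label);\n    # label 1 iff the sentence contains an intra-sentential Implicit\n    # or AltLex relation. Linked implicits are deliberately excluded.\n    data = []\n    for sent in sentences:\n        positive = any(r.type in (\"Implicit\", \"AltLex\")\n                       and not r.linked and sent.contains(r)\n                       for r in relations)\n        data.append((sent.text, sent.parse_tree, int(positive)))\n    random.Random(seed).shuffle(data)\n    a, b = int(0.6 * len(data)), int(0.8 * len(data))\n    return data[:a], data[a:b], data[b:]  # 60/20/20 split",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},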
{
"text": "Results: As the dataset is heavily imbalanced, we also added a simple baseline which predicts the most frequent label. Test set results of the recognizer on the three datasets are presented in Table 4 . Even though the baseline achieved an accuracy of \u223c0.9, it doesn't convey any useful information, as it labels all instances as 0. We can observe that the model with gold Penn TreeBank parse trees obtain the best performance, followed by the Berkeley parser. Stanford parse trees result in worst perfor- Table 4 : Results on task of identifying sentences that contain at least one intra-sentential relation, comparing gold parse trees from the PTB with the parse trees output by the Berkeley parser and by the Stanford parser.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 4",
"ref_id": null
},
{
"start": 506,
"end": 513,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},
{
"text": "Baseline refers to the model that predicts the most frequent label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},
{
"text": "mance. Examining these trees led us to conclude that, while the Stanford parser does well for basic syntactic structures, which are the most common, it has trouble with challenging structures such as those associated with conjunction. An example is provided in Ex. 11. Here, \"steps\" has been incorrectly labelled NNS, when it is actually a VBZ, heading the second conjunct. If there were only two conjuncts, explicitly conjoined with \"and\", the sentence would not contain an implicit relation. With three conjuncts, however, the first two would normally be comma-conjoined, with the discourse relation between them taken to be implicit. But the error in PoS-tagging has eliminated evidence of a second conjunct, with an implicit discourse relation to the first conjunct. Errors in PoS-tagging and mis-parsing associated with rare constructions, means that the accuracy is lower than that of the Berkeley parser. However, as Precision, Recall, and F 1 are measured for 1 labels, these metrics are more adversely affected when compared to those of the Berkeley parser. Table 5 : Precision, Recall and F 1 scores of linked/stand-alone labels predicted by the recognizer using main evaluation metrics and their proportion in test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 1067,
"end": 1074,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},
{
"text": "described in Section 5.4, we actually know the location of their arguments, because co-occurring (aka \"linked\") relations share their argument spans. Hence, recognizing explicit relations linked with implicit ones means that we also obtain argument spans of these implicits. Here we describe a first attempt to automatically discriminate explicit relations linked with implicit relations from ones that are not so linked. It comprises two steps: extracting sentences that contain explicit relations as our datasets, and then recognizing the ones linked with implicit relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},
{
"text": "Model architecture: To detect linked implicit relations from explicit relations, we use a naive Bayes classifier -specifically, the one provided in NLTK (Bird and Loper, 2004) . Production rules are selected as input feature as it has been proven notably effective in feature-based implicit discourse relation recognition task among different features (Park and Cardie, 2012) . Models trained in Task 1 will be adopted for linked sense classification.",
"cite_spans": [
{
"start": 153,
"end": 175,
"text": "(Bird and Loper, 2004)",
"ref_id": "BIBREF1"
},
{
"start": 352,
"end": 375,
"text": "(Park and Cardie, 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Towards finding implicits within sentences",
"sec_num": "5.4"
},
{
"text": "We follow the standard split to select the training and test set. Each token in the training set consists of Arg1, connective and Arg2, and are parsed to extract syntactic productions used in parent-child nodes in the argument parse trees. The 100 most-frequent production rules are used to build a feature dictionary for input. A production rule feature is labeled as 1 in the dictionary if it appears in the parse tree of the token, otherwise it will be 0. The linked/stand-alone label is determined by whether the explicit relation shares the same index value with an implicit relation. The recognizer is evaluated by how well it distinguishes explicit relations that have a linked implicit relation from ones that don't. Classifiers are evaluated on the recognized implicit relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and evaluation:",
"sec_num": null
},
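{
"text": "A sketch of the feature extraction and training (NLTK's Tree.productions() and NaiveBayesClassifier are real APIs; the helper names and the shape of train_items are our assumptions):\n\nfrom collections import Counter\nimport nltk\n\ndef featurize(tree, top_rules):\n    # Binary feature dictionary over the 100 most frequent\n    # parent-child production rules.\n    rules = {str(p) for p in tree.productions()}\n    return {r: int(r in rules) for r in top_rules}\n\ndef train_recognizer(train_items):\n    # train_items: pairs of (parse tree covering Arg1 + connective\n    # + Arg2, \"linked\" or \"standalone\" label).\n    counts = Counter(str(p) for tree, _ in train_items\n                     for p in tree.productions())\n    top_rules = [r for r, _ in counts.most_common(100)]\n    train_set = [(featurize(t, top_rules), y) for t, y in train_items]\n    return nltk.NaiveBayesClassifier.train(train_set), top_rules",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and evaluation:",
"sec_num": null
},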
{
"text": "Results: The low Recall for linked relations in Table 5 shows that the recognizer performs better on predicting stand-alone relations, which are a majority of the data. Linked implicits in the test set (WSJ Section 23) are mostly linked to conjoined clauses or conjoined VPs, and are signaled by implicit connective like \"and\" (81.08%) or \"but\" or an adverbial. Most correctly recognized relations are VPs conjoined with \"and\". All the recognized linked implicit relations are found intra-sentential. We adopt the intra-sentential classifier in Model 1 and the Basic Model to test the classifier based on the recognized results. The intra-sentential classifier achieves an F 1 score of 75, compared with 68.182 using the Basic Model. This again emphasizes that knowing the location of implicit discourse relation would benefit sense identification.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training and evaluation:",
"sec_num": null
},
{
"text": "We have shown that recognizing implicit discourse relations as annotated in the PDTB-3 now requires finding them, as well as figuring out what sense relation(s) holds between the arguments. However, we have also shown that the latter task is simplified by differences in the sense distribution of different implicit relations. We still have to develop a way of recognizing precisely where implicit relations hold in those sentences that can be identified as containing them, and a more accurate approach to sense labelling implicit relations that co-occur with explicit ones. We are also interested in whether these different sense distributions hold in other news corpora and other genres. While it is likely not the case that all languages show the same difference in the sense distribution of discourse relations, we would not be surprised if the discourse relations realized within sentences differed from those realized across sentences. In conclusion, we hope that the current effort will contribute to future work on shallow discourse parsing as annotated in the PDTB-3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a 1 j = max k\u2208n 1 (H Arg2 j k ) (5) a 2 j = max k\u2208n 2 (H Arg1 j k ) (6) A Arg1 = [a 1 1 , a 1 2 , ..., a 1 hidden size ]",
"eq_num": "(7)"
}
],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "A Arg2 = [a 2 1 , a 2 2 , ..., a 2 hidden size ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "Inter-argument interaction modeling: The modeling of the interaction between two discourse argument representations follows , which argues that discourse relations can only be determined by jointly analyzing the arguments. In our model, argument representations A Arg1 and A Arg2 are weighted by W 1 and W 2 separately. The combination of the weighted argument representations is then transformed non-linearly with tanh function in the first hidden layer H hid . It is then fed into a dense layer H dense 7 . Finally, we predict the discourse relation sense using a softmax function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "H hid = tanh(W 1 \u2022A Arg1 +W 2 \u2022A Arg2 +b hid ) (9) H dense = tanh(W dense \u2022 H 1 + b dense ) (10) output = sof tmax(W output \u2022 H dense + b output ) (11) A.2 Configuration Implementation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "The model is implemented with PyTorch. The cost function is the standard crossentropy loss function and Adam optimizer with an initial learning rate of 0.001 and a batch size of 32. We determine convergence if the performance of the model on the development set does not improve after more than 3 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
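{
"text": "A sketch of this training setup (train_loader, dev_loader and evaluate are hypothetical helpers; since the BasicModel sketch outputs log-probabilities, NLLLoss plays the role of the cross-entropy loss):\n\nmodel = BasicModel()\nopt = torch.optim.Adam(model.parameters(), lr=0.001)\nloss_fn = nn.NLLLoss()\n\nbest_dev, stalled = 0.0, 0\nwhile stalled <= 3:  # stop once the dev score stalls for >3 epochs\n    for arg1, arg2, sense in train_loader:  # batches of 32\n        opt.zero_grad()\n        loss_fn(model(arg1, arg2), sense).backward()\n        opt.step()\n    dev_score = evaluate(model, dev_loader)\n    if dev_score > best_dev:\n        best_dev, stalled = dev_score, 0\n    else:\n        stalled += 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},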
{
"text": "One problem that challenges the training of the model is the limitation on the size of the data. We introduce other resources to overcome it and adopt different techniques to avoid overfitting. Word vectors are directly taken from Word2vec embeddings (Mikolov et al., 2013a) trained with the skip-gram algorithm on Brown corpus, and are fixed during training. To avoid overfitting, we apply a 0.25 dropout ratio to the input of the LSTM layer. Batch normalization is added to normalize the activation between the hidden layer and the dense layer to accelerate the training speed and further prevent overfitting with regularization. Hyperparameter Settings: (Rutherford et al., 2017) observed the influence of different configurations on the performance of the model for the implicit sense classification task, suggesting an interaction between the lexical information in word vectors and the structural information encoded in the model itself. To determine the configuration for our model, we trained our model with different combinations of the dimension of word embedding (50, 300) and hidden size (50, 100), and evaluate it on Level 2 labels on the WSJ section 23. Table 6 presents the performance of the model with different configurations. The baseline is Most Frequent Sense heuristic, using the most frequent sense CONTINGENCY.CAUSE in the training data for each target. Our result is in line with their finding of sequential LSTM model, showing larger hidden size 100 is effective when it is accompanied with 300-dimension word embedding. Based on the performance on Level 2 labels, we choose 300dimension Word2vec word embedding and hidden size 100 as our configuration for the Basic Model.",
"cite_spans": [
{
"start": 251,
"end": 274,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF12"
},
{
"start": 657,
"end": 682,
"text": "(Rutherford et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1168,
"end": 1175,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "Our model scores 34.778 at Level 3 (31-way classification). Using cross-validation, our model obtains 41.463 at Level 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "6"
},
{
"text": "It is worth examining the performance of the model on each Level 2 label individually. Table 7 displays the precision, recall and F 1 scores of each label along with its proportion in the test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.3 Discussion",
"sec_num": null
},
{
"text": "The classifier obtains relatively higher scores on some types of labels. The first type is senses with larger sample size in the corpus, suggesting the imbalanced classification problem. Two senses occur frequently in the corpus (CONTINGENCY.CAUSE and EXPANSION.CONJUNCTION) are recognized with high Recall, but low Precision. This could indicate a strong signal, but one that is likely to be ambiguous. Other less frequent labels are constantly misclassified into these frequent labels. For example, the amount of EXPAN-SION.MANNER samples is largely reduced by our method dealing with multi-label instances, and the classifier fails to recognize the minority class. Another type of senses achieving high scores are those occurring predominantly in intrasentential relations (CONTINGENCY.PURPOSE and CONTINGENCY.CONDITION) or in intersentential relations (EXPANSION.INSTANTIATION and EXPANSION.LEVEL-OF-DETAIL). The model recognize these senses with high Precision, but different levels of Recall, which could be due to a difference in the strength of evidence signalling the relation. Additionally, TEMPO-RAL.ASYNCHRONOUS sense that associates with much higher proportion in linked relations than stand-alone ones obtain similar Recall and Precision scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Discussion",
"sec_num": null
},
{
"text": "Some previous approaches to discourse parsing have also distinguished relations that occur within a sentence from those that occur across sentences(Joty et al., 2013(Joty et al., , 2015, but it was not felt to be needed in the PDTB-2, where implicit relations only appeared across sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Differences in the distribution of sense relationsTo argue for separating the recognition of intrasentential implicits from inter-sentential implicits, and the recognition of linked implicits from standalone implicits, we show how their sense distributions are different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "combining results of the inter-sentential and intrasentential classifiers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These labels are not used in the basic model described in this work, but serve for statistical tests and further experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The default size of the dense layer is hidden size//5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their valuable comments. We would also like to thank Annie Louis for her contributions to the work on recognizing the presence of sentence-internal implicit discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Here we describe the basic model architecture for implicit relation sense classification in PDTB-3. The configuration for the model is chosen based on consideration of data size and the interaction between lexical information and structural information. A further analysis on the predictive performance of the basic model on each labels is provided as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Specifics of the Basic Model",
"sec_num": null
},
{
"text": "Figure 1 (repeated here as Figure 3 ) illustrates the overall model architecture of the neural implicit sense classifier that consists of two LSTM and maxpooling layers, a hidden layer, a dense layer, and a softmax layer. The input for the model is the discourse argument pairs with additional labels 6 , and the output is a probability distribution of the senses between the discourse argument spans.Word vectors: In our model, arguments Arg1 and Arg2 are viewed as two sequences of word vectors with length of n 1 and n 2 . Word vectors for the word in arguments are taken from word embeddings.Arg1Argument representations: The two sequences of word vectors are encoded by LSTM respectively. The hidden states H Arg1 and H Arg2 of LSTM are taken. The max-pooling function is employed to compose meaning in the hidden states and reduce parameters for the model, as it has been proven effective in (Conneau et al., 2017) . As shown in eq. 6, it will select the maximum value along the sequence at each dimension of the hidden states. a 1 j (a 2 j ) represents a maximum value from all the values in a sequence with length of n 1 (n 2 ) at dimension j of the hidden states H Arg1 (H Arg2 ). By concatenating the output of max-pooling function, we have abstract representations A Arg1 and A Arg2 of arguments Arg1 and Arg2 individually. Table 7 : Precision, Recall and F 1 scores of different labels predicted by the basic model using main evaluation metric and their proportions in test data",
"cite_spans": [
{
"start": 898,
"end": 920,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1335,
"end": 1342,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.1 Model architecture",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep enhanced representation for implicit discourse relation recognition",
"authors": [
{
"first": "Hongxiao",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "571--583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongxiao Bai and Hai Zhao. 2018. Deep enhanced representation for implicit discourse relation recog- nition. In Proceedings of the 27th International Con- ference on Computational Linguistics, pages 571- 583, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "NLTK: The natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL Interactive Poster and Demonstration Sessions",
"volume": "",
"issue": "",
"pages": "214--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird and Edward Loper. 2004. NLTK: The nat- ural language toolkit. In Proceedings of the ACL In- teractive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving implicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph",
"authors": [
{
"first": "Zeyu",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "141--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeyu Dai and Ruihong Huang. 2018. Improving im- plicit discourse relation classification by modeling inter-dependencies of discourse units in a paragraph. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long Papers), pages 141-151, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Combining intra-and multisentential rhetorical parsing for document-level discourse analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "486--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra-and multi- sentential rhetorical parsing for document-level dis- course analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics, pages 486-496, Sofia, Bulgaria.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CODRA: A novel discriminative framework for rhetorical analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "3",
"pages": "385--435",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00226"
]
},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2015. CODRA: A novel discriminative framework for rhetorical analysis. Computational Linguistics, 41(3):385-435.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Constituency parsing with a self-attentive encoder",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2676--2686",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1249"
]
},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Bauer",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshop at ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, G.s Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. Proceedings of Workshop at ICLR, 2013.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving implicit discourse relation recognition through feature set optimization",
"authors": [
{
"first": "Joonsuk",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "108--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joonsuk Park and Claire Cardie. 2012. Improving im- plicit discourse relation recognition through feature set optimization. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 108-112, Seoul, South Korea. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "2961--2968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In Proceedings of the Sixth International Language Resources and Evaluation (LREC'08), pages 2961- 2968. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Realization of discourse relations by other means: Alternative lexicalizations",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2010. Realization of discourse relations by other means: Alternative lexicalizations. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (COLING), Beijing, China.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploring substitutability through discourse adverbials and multiple judgments",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rohde",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Dickinson",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings, 12th International Conference on Computational Semantics (IWCS 2017)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Rohde, Anna Dickinson, Nathan Schneider, Christopher Clark, Annie Louis, and Bonnie Webber. 2017. Exploring substitutability through discourse adverbials and multiple judgments. In Proceedings, 12th International Conference on Computational Se- mantics (IWCS 2017), Montpellier, France.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Discourse coherence: Concurrent explicit and implicit relations",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rohde",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56 th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Rohde, Alexander Johnson, Nathan Schneider, and Bonnie Webber. 2018. Discourse coherence: Concurrent explicit and implicit relations. In Pro- ceedings of the 56 th Annual Meeting of the ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A systematic study of neural discourse models for implicit discourse relation",
"authors": [
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "281--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attapol Rutherford, Vera Demberg, and Nianwen Xue. 2017. A systematic study of neural discourse mod- els for implicit discourse relation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pages 281-291.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Robust non-explicit neural discourse parser in English and Chinese",
"authors": [
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the CoNLL-16 shared task",
"volume": "",
"issue": "",
"pages": "55--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attapol Rutherford and Nianwen Xue. 2016. Robust non-explicit neural discourse parser in English and Chinese. In Proceedings of the CoNLL-16 shared task, pages 55-59, Berlin, Germany.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Examples and specifications that prove a point: Identifying elaborative and argumentative discourse relations",
"authors": [
{
"first": "Merel",
"middle": [],
"last": "Scholman",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Dialogue & Discourse",
"volume": "8",
"issue": "",
"pages": "56--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Merel Scholman and Vera Demberg. 2017. Exam- ples and specifications that prove a point: Identi- fying elaborative and argumentative discourse rela- tions. Dialogue & Discourse, 8:56-83.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "On the need of cross validation for discourse relation classification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "150--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Shi and Vera Demberg. 2017. On the need of cross validation for discourse relation classification. In Proceedings of the 15th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 150-156, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The CoNLL-2015 shared task on shallow discourse parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol Rutherford. 2015. The CoNLL-2015 shared task on shallow discourse parsing. In Proceedings of the Nine- teenth Conference on Computational Natural Lan- guage Learning -Shared Task, pages 1-16, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "CoNLL 2016 shared task on multilingual shallow discourse parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongmin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the CoNLL-16 shared task",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, At- tapol Rutherford, Bonnie Webber, Chuan Wang, and Hongmin Wang. 2016. CoNLL 2016 shared task on multilingual shallow discourse parsing. In Pro- ceedings of the CoNLL-16 shared task, pages 1- 19, Berlin, Germany. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": ". . . Exxon Corp. built the plant but (Implicit=then) closed it in 1985. [wsj 1748] (COMPARISON.CONCESSION.ARG2-AS-DENIER, TEMPORAL.ASYNCHRONOUS.PRECEDENCE) (6) . . . which [i.e., the line item veto] would enable him to kill individual items in a big spending bill without (Implicit=however) having to kill the entire bill. [wsj 1133] (EXPANSION.MANNER.ARG2-AS-MANNER, COMPARISON.CONCESSION.ARG2-AS-DENIER)"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The overall model architecture for implicit sense classification"
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Confusion matrix of the Basic Model, Model 1 and Model 2"
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "MARKET MOVES, these managers don't. ( ( S-HLN ( S ( NP-SBJ ( NN MARKET ) ) ( VP ( VBZ MOVES ) ) ) ( , , ) ( S ( NP-SBJ ( DT these ) ( NNS managers ) ) ( VP ( VBP do ) ( RB n't ) ( VP ( -NONE-*?* ) ) ) ) ( . . ) ) ) [wsj 1825] (10) Oil-tool prices are even edging up. ( ( S ( NP-SBJ ( NN Oil-tool ) ( NNS prices ) ) ( VP ( VBP are ) ( ADVP ( RB even ) ) ( VP ( VBG edging ) ( ADVP-DIR ( RP up ) ) ) ) ( . . ) ) ) [wsj 0725]"
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "With three minutes left on the clock, Mr. Aikman takes the snap, steps back and fires a 21-yard pass -straight into the hands of an Atlanta defensive back. IN CD NNS VBD IN DT NN , NNP NNP VBZ DT NN , NNS RB CC VBZ DT JJ NN : RB IN DT NNS IN DT NNP NN RB . ((S (SBAR (IN With) (S (NP (CD three) (NNS minutes)) (VP (VBD left) (PP (IN on) (NP (DT the) (NN clock)))))) (, ,) (NP (NNP Mr.) (NNP Aikman)) (VP (VP (VBZ takes) (NP (NP (DT the) (NN snap)) (, ,) (NP (NNS steps))) (ADVP (RB back))) (CC and) (VP (VBZ fires) (NP (DT a) (JJ 21-yard) (NN pass)) (: -) (PP (RB straight) (IN into) (NP (NP (DT the) (NNS hands)) (PP (IN of) (NP (DT an) (NNP Atlanta) (NN defensive)))))) (ADVP (RB back))) (. .))) [wsj 1411]"
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The overall model architecture for implicit sense classification"
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"3\">inter-sentential intra-sentential</td></tr><tr><td colspan=\"2\">Comparison Concession</td><td colspan=\"3\">1355 (8.70%) 136 (2.19%)</td></tr><tr><td/><td colspan=\"2\">Concession+SpeechAct 7</td><td>(0.04%) 3</td><td>(0.05%)</td></tr><tr><td/><td>Contrast</td><td colspan=\"3\">700 (4.50%) 156 (2.51%)</td></tr><tr><td/><td>Similarity</td><td>14</td><td>(0.09%) 14</td><td>(0.23%)</td></tr><tr><td colspan=\"2\">Contingency Cause</td><td colspan=\"3\">4153 (26.67%) 1613 (25.97%)</td></tr><tr><td/><td>Cause+SpeechAct</td><td>21</td><td>(0.13%) 1</td><td>(0.02%)</td></tr><tr><td/><td>Cause+Belief</td><td colspan=\"2\">105 (0.67%) 94</td><td>(1.51%)</td></tr><tr><td/><td>Condition</td><td>1</td><td colspan=\"2\">(0.01%) 198 (3.19%)</td></tr><tr><td/><td>Condition+SpeechAct</td><td>1</td><td>(0.01%) 1</td><td>(0.02%)</td></tr><tr><td/><td>Purpose</td><td>19</td><td colspan=\"2\">(0.12%) 1351 (21.76%)</td></tr><tr><td>Expansion</td><td>Conjunction</td><td colspan=\"3\">3648 (23.43%) 733 (11.80%)</td></tr><tr><td/><td>Disjunction</td><td>9</td><td>(0.06%) 21</td><td>(0.34%)</td></tr><tr><td/><td>Equivalence</td><td colspan=\"2\">286 (1.84%) 48</td><td>(0.77%)</td></tr><tr><td/><td>Exception</td><td>4</td><td>(0.03%) 1</td><td>(0.02%)</td></tr><tr><td/><td>Instantiation</td><td colspan=\"2\">1385 (8.89%) 87</td><td>(1.40%)</td></tr><tr><td/><td>Level-of-detail</td><td colspan=\"3\">2644 (16.98%) 589 (9.48%)</td></tr><tr><td/><td>Manner</td><td>4</td><td colspan=\"2\">(0.03%) 223 (3.59%)</td></tr><tr><td/><td>Substitution</td><td colspan=\"3\">221 (1.42%) 145 (2.33%)</td></tr><tr><td>Temporal</td><td>Asynchronous</td><td colspan=\"3\">647 (4.15%) 608 (9.79%)</td></tr><tr><td/><td>Synchronous</td><td colspan=\"3\">348 (2.23%) 188 (3.03%)</td></tr><tr><td>total</td><td/><td>15572</td><td>6210</td></tr></table>",
"html": null,
"num": null,
"text": "Task 2: Identify the location of implicit relations. To reduce the dependency on the gold standard annotations of where implicit discourse re-"
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Main evaluation metric</td><td/><td>Cross</td></tr><tr><td/><td colspan=\"4\">inter-sentential intra-sentential overall validation</td></tr><tr><td>Basic model</td><td>35.791</td><td>47.154</td><td>38.608</td><td>41.463</td></tr><tr><td>Model 1</td><td>34.973</td><td>56.666</td><td>40.222</td><td>43.418</td></tr><tr><td>Model 2</td><td>37.701</td><td>50.410</td><td>40.827</td><td>42.174</td></tr></table>",
"html": null,
"num": null,
"text": "Distribution of linked and stand-alone implicit relations among Level 2 labels and the proportion of each label with respect to the total linked/stand-alone implicit relations"
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
}
}
}
}