{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:24.943966Z"
},
"title": "Evaluating Universal Dependency Parser Recovery of Predicate Argument Structure via CompChain Analysis",
"authors": [
{
"first": "Sagar",
"middle": [],
"last": "Indurkhya",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Beracah",
"middle": [],
"last": "Yankama",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Berwick",
"suffix": "",
"affiliation": {
"laboratory": "LIDS/IDSS, Dept. of EECS MIT Room",
"institution": "",
"location": {
"addrLine": "32D-728 32 Vassar St. Cambridge",
"postCode": "02139",
"region": "MA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Accurate recovery of predicate-argument structure from a Universal Dependency (UD) parse is central to downstream tasks such as extraction of semantic roles or event representations. This study introduces compchains, a categorization of the hierarchy of predicate dependency relations present within a UD parse. Accuracy of compchain classification serves as a proxy for measuring accurate recovery of predicate-argument structure from sentences with embedding. We analyzed the distribution of compchains in three UD English treebanks, EWT, GUM and LinES, revealing that these treebanks are sparse with respect to sentences with predicate-argument structure that includes predicate-argument embedding. We evaluated the CoNLL 2018 Shared Task UDPipe (v1.2) baseline (dependency parsing) models as compchain classifiers for the EWT, GUM and LinES UD treebanks. Our results indicate that these three baseline models exhibit poorer performance on sentences with predicate-argument structure with more than one level of embedding; we used compchains to characterize the errors made by these parsers, and we present examples of erroneous parses produced by these parsers that were identified using compchains. We also analyzed the distribution of compchains in 58 non-English UD treebanks and then used compchains to evaluate the CoNLL'18 Shared Task baseline model for each of these treebanks. Our analysis shows that performance with respect to compchain classification is only weakly correlated with the official evaluation metrics (LAS, MLAS and BLEX). We identify gaps in the distribution of compchains in several of the UD treebanks, thus providing a roadmap for how these treebanks may be supplemented. We conclude by discussing how compchains provide a new perspective on the sparsity of training data for UD parsers, as well as the accuracy of the resulting UD parses.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Accurate recovery of predicate-argument structure from a Universal Dependency (UD) parse is central to downstream tasks such as extraction of semantic roles or event representations. This study introduces compchains, a categorization of the hierarchy of predicate dependency relations present within a UD parse. Accuracy of compchain classification serves as a proxy for measuring accurate recovery of predicate-argument structure from sentences with embedding. We analyzed the distribution of compchains in three UD English treebanks, EWT, GUM and LinES, revealing that these treebanks are sparse with respect to sentences with predicate-argument structure that includes predicate-argument embedding. We evaluated the CoNLL 2018 Shared Task UDPipe (v1.2) baseline (dependency parsing) models as compchain classifiers for the EWT, GUM and LinES UD treebanks. Our results indicate that these three baseline models exhibit poorer performance on sentences with predicate-argument structure with more than one level of embedding; we used compchains to characterize the errors made by these parsers, and we present examples of erroneous parses produced by these parsers that were identified using compchains. We also analyzed the distribution of compchains in 58 non-English UD treebanks and then used compchains to evaluate the CoNLL'18 Shared Task baseline model for each of these treebanks. Our analysis shows that performance with respect to compchain classification is only weakly correlated with the official evaluation metrics (LAS, MLAS and BLEX). We identify gaps in the distribution of compchains in several of the UD treebanks, thus providing a roadmap for how these treebanks may be supplemented. We conclude by discussing how compchains provide a new perspective on the sparsity of training data for UD parsers, as well as the accuracy of the resulting UD parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Universal Dependencies (UD) project (Nivre et al., 2016) is a multilingual annotation scheme for dependency grammars that has gained wide usage (Zeman et al., 2017; Kong et al., 2017; Qi et al., 2020). Accordingly, automatically identifying whether a dependency parse 1 is correct or incorrect, as well as the potential source of such errors, becomes an important part of NLP pipelines. For example, such identification can prevent errors from propagating to downstream applications such as the identification of predicate-argument structure, involved in semantic role labeling and sentiment analysis. 2 Furthermore, embedding of sentences within sentences, and in particular embedding of predicate-argument structures within one another, is one of the means by which humans can generate an infinity of different <sentence, meaning> pairings, so it is important to evaluate whether a UD parser can accurately recover the predicate-argument structure of sentences with embedding. Thus, characterizing the limits of how accurately and consistently UD parsers assign predicate-argument structure in the context of correct UD annotation also becomes important (Nivre and Fang, 2017; Oepen et al., 2017; Fares et al., 2018; White et al., 2016; Mille et al., 2018). That is the goal of this study.",
"cite_spans": [
{
"start": 40,
"end": 58,
"text": "Nivre et al., 2016",
"ref_id": "BIBREF21"
},
{
"start": 148,
"end": 168,
"text": "(Zeman et al., 2017;",
"ref_id": "BIBREF36"
},
{
"start": 169,
"end": 187,
"text": "Kong et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 188,
"end": 204,
"text": "Qi et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 1187,
"end": 1209,
"text": "(Nivre and Fang, 2017;",
"ref_id": "BIBREF22"
},
{
"start": 1210,
"end": 1229,
"text": "Oepen et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 1230,
"end": 1249,
"text": "Fares et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 1250,
"end": 1269,
"text": "White et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 1270,
"end": 1289,
"text": "Mille et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study we introduce compchains, a categorization of the hierarchy of predicate dependency relations present within a Universal Dependency (UD) parse; this categorization serves as a proxy for predicate-argument structure. We use compchains to evaluate the accuracy of three (English) CoNLL 2018 Shared Task baseline models for the UDPipe dependency parser (Zeman et al., 2018). We found that the baseline model for the EWT UD treebank was more accurate than the baseline models for the LinES and GUM UD treebanks. We then used compchains to characterize the errors (relevant to predicate-argument structure) made by these models. We found that the accuracy of all three models dropped significantly when restricting the test set to samples with predicate-argument structure with embedding. Finally, we extended the analysis above to languages other than English, computing the distribution of compchains in 58 UD treebanks and evaluating the performance of the corresponding CoNLL 2018 Shared Task baseline models (for the UDPipe parser) as compchain classifiers. We conclude by discussing deficiencies in the distribution of predicate-argument structure with embedding present in the UD treebanks, as identified by our analysis.",
"cite_spans": [
{
"start": 363,
"end": 383,
"text": "(Zeman et al., 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section reviews prior work on the evaluation of (Universal) dependency parsers and the characterization of the errors these parsers make. The CoNLL Shared Task is a well-established benchmark for evaluating the performance of multilingual (Universal) dependency parsers (Buchholz and Marsi, 2006; Zeman et al., 2017, 2018). The task uses a number of metrics to evaluate the accuracy of a parser, including: UAS (unlabeled attachment score), LAS (labeled attachment score), CLAS (Content-word LAS) (Nivre and Fang, 2017), MLAS (Morphologically-aware LAS) and BLEX (BiLEXical Dependency Score). However, these metrics rely on the attachment accuracy (of dependency relations) 3 and do not take into account that errors cascade: if the parser incorrectly attaches one dependency relation, it may then be forced to make yet another incorrect attachment (Ng and Curran, 2015), making it difficult to identify the provenance of an error.",
"cite_spans": [
{
"start": 275,
"end": 301,
"text": "(Buchholz and Marsi, 2006;",
"ref_id": "BIBREF2"
},
{
"start": 302,
"end": 320,
"text": "Zeman et al., 2017",
"ref_id": "BIBREF36"
},
{
"start": 321,
"end": 341,
"text": "Zeman et al., , 2018",
"ref_id": "BIBREF35"
},
{
"start": 516,
"end": 538,
"text": "(Nivre and Fang, 2017)",
"ref_id": "BIBREF22"
},
{
"start": 872,
"end": 893,
"text": "(Ng and Curran, 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In light of this, efforts to further characterize the errors have proceeded in several directions. One direction involves studying whether and how parsing errors result from the design of the dependency parser: (McDonald and Nivre, 2007) characterizes and compares the errors produced by graph-based dependency parsers (e.g. the MST-Parser of (McDonald and Pereira, 2006); see also (Kiperwasser and Goldberg, 2016)) and by transition-based dependency parsers (e.g. the MaltParser of (Nivre et al., 2006)); (Zhang and Clark, 2008) shows how the two approaches to dependency parsing may be combined and documents the resulting improvement in performance.",
"cite_spans": [
{
"start": 219,
"end": 245,
"text": "(McDonald and Nivre, 2007)",
"ref_id": "BIBREF14"
},
{
"start": 351,
"end": 379,
"text": "(McDonald and Pereira, 2006)",
"ref_id": "BIBREF15"
},
{
"start": 391,
"end": 423,
"text": "(Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF9"
},
{
"start": 490,
"end": 510,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF24"
},
{
"start": 514,
"end": 537,
"text": "(Zhang and Clark, 2008)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An alternative direction involves characterizing the errors in the context of linguistic theory: e.g. (Kummerfeld et al., 2012) introduced a method for classifying erroneous parse trees by repairing each tree with a series of tree-transformations, each of which has a linguistic interpretation; (Mahler et al., 2017) showed that it is possible to systematically break NLP systems for sentiment analysis by editing sentences with linguistically interpretable transformations. In this study we pursue this latter direction, opting to characterize erroneous parse trees by classifying their predicate-argument structure using compchains.",
"cite_spans": [
{
"start": 102,
"end": 127,
"text": "(Kummerfeld et al., 2012)",
"ref_id": "BIBREF11"
},
{
"start": 317,
"end": 338,
"text": "(Mahler et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Within a UD parse tree, predicate-argument structure 4 is encoded by core argument dependency relations, along with the special dependency relation root. 5 The core-argument dependency relations fall into two classes: predicate relations and nominal relations. In this study, we limit our attention to the two predicate dependency relations that encode embedding of clausal complements: (i) ccomp, a dependent clausal complement, and (ii) xcomp, a clausal complement lacking a subject; the subject is determined by an argument external to the xcomp, usually the object (or otherwise the subject) of the next higher clause. 6 We will focus on categorizing sequences of these two dependency relations (with POS marked as VERB) that originate from the root of a dependency tree (intuitively, the spine of the predicate-argument structure). This notion is formalized as follows: (Figure 1: Examples of compchain classifications (left) for eight UD parses (right) produced by the UDv2.2 EWT baseline model using UDPipe 1.2. In each parse, the node with no incoming dependency relations is the root. Sentence 8 is classified as the \u2205 compchain because the root is not marked as VERB.) Definition. A compchain is a finite sequence of dependency relations that traces a path starting at",
"cite_spans": [
{
"start": 628,
"end": 629,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 980,
"end": 988,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Compchains",
"sec_num": "3"
},
{
"text": "the root node of a dependency parse tree and passing through only xcomp and ccomp dependency relations, subject to the constraints that: (i) every node in a compchain must have the POS tag VERB; (ii) no node in a compchain may have an xcomp or ccomp child dependency relation with POS VERB that is not also in the compchain. 7 We denote a compchain by listing its sequence of dependency relations, starting from the root of the tree, using the notation R = root, X = xcomp, C = ccomp; e.g. we denote the compchain [root \u2192 xcomp \u2192 ccomp] as RXC. See Figure 1 for examples of UD parses and their compchain classifications. One way to evaluate (indirectly) how well a UD parser can identify predicate-argument structure for sentences in a UD treebank is to evaluate whether the UD parse assigned by the parser to a sentence in the treebank has the same compchain as the compchain associated with the gold 7 This constraint serves to ensure that if a UD parse tree has a compchain, it is unique and may be derived deterministically. This constraint also implies that some valid UD parse trees do not have a compchain, e.g. a parse in which two xcomp dependency relations are both children of the same node. We use the symbol \u2205 to denote that a UD parse tree has no compchain.",
"cite_spans": [
{
"start": 350,
"end": 351,
"text": "7",
"ref_id": null
},
{
"start": 934,
"end": 935,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 581,
"end": 589,
"text": "Figure-1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Compchains",
"sec_num": "3"
},
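{
"text": "The definition above can be made concrete in code. The following is a minimal illustrative sketch (ours, not the authors'); the flat (id, head, deprel, upos) token layout and the function name are assumptions:

```python
# Illustrative sketch: extract the compchain of a dependency parse given as
# (id, head, deprel, upos) tuples, following the definition above.
# Returns None where the text uses the empty compchain.
def compchain(tokens):
    children = {}
    root = None
    for tid, head, deprel, upos in tokens:
        children.setdefault(head, []).append((tid, deprel, upos))
        if deprel == 'root':
            root = (tid, upos)
    if root is None or root[1] != 'VERB':
        return None  # the first chain node (root) must be a VERB (constraint i)
    chain, node = ['R'], root[0]
    while True:
        # clausal-complement children of the current node with POS VERB
        comps = [(tid, rel) for tid, rel, upos in children.get(node, [])
                 if rel in ('xcomp', 'ccomp') and upos == 'VERB']
        if not comps:
            return ''.join(chain)
        if len(comps) > 1:
            return None  # constraint (ii): the chain would not be unique
        tid, rel = comps[0]
        chain.append('X' if rel == 'xcomp' else 'C')
        node = tid
```

For example, a parse whose root VERB has an xcomp VERB child, which in turn has a ccomp VERB child, is classified as RXC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compchains",
"sec_num": "3"
},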
{
"text": "UD parse listed for that sentence (in the treebank); we refer to this task as compchain classification. Performance on the compchain classification task is a proxy for performance on the task of classifying predicate-argument structure that includes predicate-argument embedding. If a UD parser performs poorly on the compchain classification task, predicate-argument structure cannot be reliably recovered from an (output) UD parse tree via top-down traversal of the sequence of dependency relations that forms the associated compchain. See Figure 2 for examples of incorrect compchain classifications that reflect the parser recovering incorrect predicate-argument structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 542,
"end": 549,
"text": "Figure-",
"ref_id": null
}
],
"eq_spans": [],
"section": "Compchains",
"sec_num": "3"
},
{
"text": "We evaluated the performance of the CoNLL'18 shared task baseline (parsing) models for English as compchain classifiers using three UD (v2.2) English treebanks: the English Web Treebank (EWT), with a total of 16,622 sentences (Schuster and Manning, 2016); the English side of the English-Swedish Parallel Treebank (LinES), with a total of 4,564 sentences (Ahrenberg, 2007); and the GUM treebank, with a total of 4,390 sentences (Zeldes, 2017). 8 We began by computing the distribution of compchains in each of the sections (train, dev, test) of each treebank (see Table 1). We observed that although the training section of the EWT (UD) treebank includes a non-negligible number of UD parse trees that are classified (according to their corresponding gold UD parse) as compchains with three or more dependency relations, the test section of the EWT (UD) treebank does not. This suggests that performing well on the task of parsing the test section of the EWT (UD) treebank need not indicate competency in parsing sentences with predicate-argument embedding of degree two or more. We also observed that the LinES and GUM treebanks have a negligible number of parse trees (across all sections) that are classified as compchains with three or more dependency relations, i.e. RCC, RCX, RXC and RXX. (Figure 2: Parses (1) and (2) are for the sentence \"How come no one bothers to ask any questions in this section?\" The parses in (3) and (4) are for the sentence \"Even the least discriminating diner would know not to eat at Sprecher's.\" Both sentences were taken from the UDv2.2 English Web Treebank. (1) and (3) are the gold parses from the treebank, whereas (2) and (4) were produced by UDPipe using the CoNLL'18 baseline language model for UDv2.2 EWT. Both (2) and (4) are incorrectly classified, reflecting that these two parses encode misinterpretations compared to the interpretations in their respective gold parses, i.e. (1) and (3).)",
"cite_spans": [
{
"start": 226,
"end": 253,
"text": "Schuster and Manning, 2016)",
"ref_id": "BIBREF28"
},
{
"start": 1028,
"end": 1042,
"text": "(Zeldes, 2017)",
"ref_id": "BIBREF34"
},
{
"start": 1045,
"end": 1046,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1174,
"end": 1180,
"text": "Table-",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation of English UD Treebanks",
"sec_num": "4.1"
},
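{
"text": "The per-section distributions discussed here can be tallied straightforwardly. A minimal illustrative sketch (ours, not the authors'), assuming plain ten-column CoNLL-U input and taking the compchain extractor as a parameter:

```python
# Illustrative sketch: tally the compchain distribution of one section of a
# CoNLL-U treebank. `extract` maps a token list [(id, head, deprel, upos), ...]
# to a compchain label such as 'RX', or None for no compchain.
from collections import Counter

def compchain_distribution(conllu_text, extract):
    counts = Counter()
    for block in conllu_text.strip().split('\n\n'):
        tokens = []
        for line in block.splitlines():
            if not line or line.startswith('#'):
                continue  # skip sentence-level comment lines
            cols = line.split('\t')
            if '-' in cols[0] or '.' in cols[0]:
                continue  # skip multiword-token and empty-node lines
            # CoNLL-U columns: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, ...
            tokens.append((int(cols[0]), int(cols[6]), cols[7], cols[3]))
        counts[extract(tokens) or 'NONE'] += 1
    return counts
```

The resulting counts for the train, dev and test sections of each treebank can then be compared directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of English UD Treebanks",
"sec_num": "4.1"
},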
{
"text": "Next, we evaluated the CoNLL'18 shared task baseline (parsing) models 9 for the three treebanks as compchain classifiers. We used UDPipe (v1.2), a transition-based non-projective dependency parser, to parse the test section of each of the three treebanks. 8 We used the pretrained word embeddings supplied with the CoNLL Shared Task for each of the three treebanks; these embeddings were produced with word2vec (Mikolov et al., 2013b,a).",
"cite_spans": [
{
"start": 251,
"end": 252,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of English UD Treebanks",
"sec_num": "4.1"
},
{
"text": "9 These UDPipe models were trained on the training section of the UDv2.2 EWT/LinES/GUM treebanks respectively. We also used the tagging and tokenization pipeline provided by UDPipe. We parsed each test section with the corresponding baseline model (Straka and Strakov\u00e1, 2017). We then classified the compchain of each UD parse and compared it to the compchain associated with the corresponding gold parse. We report the F-measures for this classification task in Table 2. We observed that the baseline model for EWT had the best performance as a compchain classifier. We also computed the per-compchain F-measures and observed that, for all three baseline models, the per-compchain F1-score for RX was notably better than that for RC. We also observed a steep falloff in per-compchain F1-score as the number of dependency relations in a compchain increases. This suggests either that the parsers were not trained on enough examples of sentences with predicate-argument embedding, or that they did not adequately generalize from the limited number of examples that they were trained on.",
"cite_spans": [
{
"start": 217,
"end": 244,
"text": "(Straka and Strakov\u00e1, 2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 433,
"end": 440,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation of English UD Treebanks",
"sec_num": "4.1"
},
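{
"text": "The scoring used above (per-compchain F1 and a support-weighted total) can be reproduced from per-sentence labels. A minimal illustrative sketch (ours, not the authors'), assuming parallel lists of gold and predicted compchain labels:

```python
# Illustrative sketch: per-compchain F1 and support-weighted total F1 for
# the compchain classification task.
from collections import Counter

def per_class_f1(gold, pred):
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    scores = {}
    for label in set(gold) | set(pred):
        p_den = tp[label] + fp[label]
        r_den = tp[label] + fn[label]
        prec = tp[label] / p_den if p_den else 0.0
        rec = tp[label] / r_den if r_den else 0.0
        scores[label] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

def weighted_f1(gold, pred):
    # total F1, weighted by the support (gold frequency) of each compchain
    scores = per_class_f1(gold, pred)
    support = Counter(gold)
    return sum(scores[l] * support[l] / len(gold) for l in support)
```

Because the weighting follows the gold class frequencies, the total is dominated by the most frequent compchains, which is why the per-compchain scores are reported separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of English UD Treebanks",
"sec_num": "4.1"
},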
{
"text": "Finally, we computed and analyzed the confusion matrix (i.e. error matrix) for each of the three baseline models, evaluating each model on the test section of its associated treebank (see Figure 3). In each confusion matrix, off-diagonal entries count instances of parses with erroneous predicate-argument structure, as indicated by the predicted compchain differing from the actual compchain (if two parse trees have different compchains, then their predicate-argument structures must differ as well). On-diagonal entries count instances of parses with correctly classified compchains, which indicates that the parse may be correct (though it may well have errors not related to predicate-argument structure). (Table 1: distribution of compchains, as train/dev/test counts for EWT, LinES and GUM respectively: \u2205 5230/985/1065, 591/191/224, 879/201/268; R 5500/815/806, 1767/608/580, 1661/413/419; RC 758/79/79, 135/43/43, 171/43/33; RX 808/100/104, 202/65/50, 158/43/41; RCC 47/4/6, 1/0/2, 8/1/2; RCX 94/7/9, 17/1/6, 10/2/1; RXC 48/6/3, 10/2/6, 6/0/2; RXX 39/2/2, 12/2/3, 13/3.) (Table 2: F-measures for the compchain classification of the parse trees in the EWT, LinES and GUM (UD) treebanks. The leftmost column gives the true compchain from the corresponding UD treebank. Each row gives the F1-score for the evaluation of the parser (as a compchain classifier) on sentences in the treebank that had the listed compchain, except for the bottommost row, which gives the total (weighted) F1-score over all compchains, i.e. performance as a multi-way classifier.) We observed, for all three models, that compchains of length two or less were very rarely misclassified as compchains of length three or more, and that compchains of length two were often misclassified as the R compchain (see Figure 2 for an example of such a misclassification). We also observed that in the case of the baseline model for LinES, the compchain RC is frequently misclassified as RX, but the compchain RX is rarely misclassified as RC; this asymmetry may reflect the difference in the number of training examples in the LinES treebank: 135 in the case of RC and 202 in the case of RX (see Table 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 197,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 579,
"end": 892,
"text": "/ 0 5230 985 1065 591 191 224 879 201 268 R 5500 815 806 1767 608 580 1661 413 419 RC 758 79 79 135 43 43 171 43 33 RX 808 100 104 202 65 50 158 43 41 RCC 47 4 6 1 0 2 8 1 2 RCX 94 7 9 17 1 6 10 2 1 RXC 48 6 3 10 2 6 6 0 2 RXX 39 2 2 12 2 3 13 3",
"ref_id": "TABREF2"
},
{
"start": 893,
"end": 900,
"text": "Table 2",
"ref_id": null
},
{
"start": 1734,
"end": 1742,
"text": "Figure-2",
"ref_id": "FIGREF0"
},
{
"start": 2112,
"end": 2118,
"text": "Table-",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation of English UD Treebanks",
"sec_num": "4.1"
},
{
"text": "We also used the compchain classification task to evaluate the CoNLL'18 shared task baseline models (and the respective UD treebanks they were trained on) for languages other than English; this was motivated by the observation that, because the UD treebanks are derived from a variety of textual sources and thus have varying compchain distributions, we can use them collectively to evaluate and characterize the performance of the UDPipe dependency parser under various training conditions. Figure 4 presents the distribution of compchains across 61 UD treebanks (including the three English treebanks analyzed earlier in this study). 10 Our analysis reveals that: (i) the UD treebanks for Hindi and Urdu have no instances of the compchain RC in either the training or test sections; (ii) the UD treebanks for Japanese, Korean, Turkish and Uyghur have no instances of the compchain RC in either the training or test sections; (iii) the UD treebanks for Hindi, Japanese, Turkish and Uyghur do not include any instances of compchains of length three or more (i.e. RXX, RCC, RXC, or RCX) in either the training or test sections.",
"cite_spans": [
{
"start": 634,
"end": 636,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 490,
"end": 498,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Multilingual Evaluation of UD Treebanks",
"sec_num": "4.2"
},
{
"text": "We computed the F1-scores for the performance of each baseline model on the compchain classification task. 11 The F1-score for length-1 compchains is very weakly correlated with the F1-score for length-2 compchains, with R^2 = 0.265 (see Figure 5), and the F1-scores for the two length-2 compchains (RC and RX) are also very weakly correlated, with R^2 = 0.177 (see Figure 6). This suggests that performance in recovering predicate-argument structures with differing embedding structures is largely unrelated across those structures and should be measured explicitly, as the compchain classification task does. Additionally, we observe (as we did with the models trained on English treebanks) a rapid decline in per-class F1-score as the length of the compchain increases, in particular for compchains of length two or more (see Figure 7). This is revealing because, although the lack of compchains of length two or more in the UD treebanks suggests that we should not necessarily expect a dependency parser trained on a treebank to generalize out of the training domain, there is empirical evidence that humans have the capacity to acquire a grammar from sentences with at most degree-1 embedding (corresponding to compchains of length 2) and then later correctly parse sentences with a degree of embedding of two or more (Wexler and Culicover, 1980; Morgan, 1986; Lightfoot, 1989); thus, the poor performance on compchains of length three or more suggests that the CoNLL 2018 Shared Task baseline models are not able to generalize beyond the distribution of syntactic structures they were trained upon, in contrast to human learners.",
"cite_spans": [
{
"start": 1312,
"end": 1340,
"text": "(Wexler and Culicover, 1980;",
"ref_id": "BIBREF32"
},
{
"start": 1341,
"end": 1354,
"text": "Morgan, 1986;",
"ref_id": "BIBREF19"
},
{
"start": 1355,
"end": 1371,
"text": "Lightfoot, 1989)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 238,
"end": 247,
"text": "Figure 5)",
"ref_id": "FIGREF3"
},
{
"start": 363,
"end": 371,
"text": "Figure 6",
"ref_id": "FIGREF4"
},
{
"start": 813,
"end": 821,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multilingual Evaluation of UD Treebanks",
"sec_num": "4.2"
},
{
"text": "Word ordering data (i.e. head-directionality) for each of the 61 languages in the UD treebanks was obtained from the WALS Online database (Dryer, 2013); we retrieved this information because word ordering dictates whether a predicate precedes or follows its complement in the linear order of the words in a sentence, and we wanted to understand whether this affects the parser's performance on the compchain classification task (see Table 5 in the appendix for the word order of each language). The 47 languages with verb-object (VO) ordering had a median and mean weighted-average F1-score of 0.85 and 0.88 respectively; the 18 languages with object-verb (OV) ordering had a median and mean weighted-average F1-score of 0.86 and 0.85 respectively. Word ordering thus does not appear to impact the weighted-average F1-score. The F1-scores associated with compchains of length 2 (i.e. RX and RC) tell a different story: for the RC compchain, the median F1-scores for verb-object and object-verb languages were 0.68 and 0.55 respectively, and for the RX compchain, the median F1-scores for verb-object and object-verb languages were 0.72 and 0.42 respectively; thus, for both compchains of length 2, models trained on verb-object ordered languages performed significantly better than models trained on object-verb ordered languages. 12 Given that verb-object (i.e. head-initial) and object-verb (i.e. head-final) orderings are associated with right-branching and left-branching structures respectively, our results suggest that the UDPipe parser has difficulty dealing with left-branching structures.",
"cite_spans": [
{
"start": 138,
"end": 151,
"text": "(Dryer, 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Word Ordering",
"sec_num": "4.2.1"
},
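{
"text": "The group comparison above can be sketched as follows. A minimal illustrative sketch (ours, not the authors'), assuming a mapping from language to its WALS word order ('VO' or 'OV') and a mapping from language to its F1-score:

```python
# Illustrative sketch: median and mean F1 by head-directionality group.
from statistics import mean, median

def f1_by_word_order(word_order, f1):
    groups = {'VO': [], 'OV': []}
    for lang, order in word_order.items():
        groups[order].append(f1[lang])
    # (median, mean) per word-order group, skipping empty groups
    return {o: (median(v), mean(v)) for o, v in groups.items() if v}
```

The same grouping can be applied to the weighted-average F1 or to the per-compchain F1 for RC and RX.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Word Ordering",
"sec_num": "4.2.1"
},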
{
"text": "We carried out a regression analysis to investigate the relationship between the correctness of compchain classification and sentence length; this was motivated by the observation that sentences with higher degrees of embedding, and thus longer compchains, tend to be longer sentences. (Figure 7: Distributions of F1-scores for length-3 compchains over all UD languages. For each length-3 compchain, F1-scores are reported for the languages that had that compchain present in the test treebank.) For each test treebank, we fitted a logistic function over its sentences, with the log of the sentence length (i.e. the number of tokens, including punctuation) serving as the independent variable, and the (binary) dependent variable being whether the compchain associated with that sentence was correctly classified. We interpreted a well-fitting logistic function as indicating that compchain accuracy depends on sentence length. To evaluate the fit of the logistic function, we computed the Area Under Curve (AUC) measure of the Receiver Operating Characteristic (ROC) curve for the fitted logistic function. Figure 8 presents the distribution of AUCs over the test corpora of: (a) the 43 UD treebanks for languages with verb-object (VO) word ordering, and (b) the 18 UD treebanks for languages with object-verb (OV) word ordering. We observe that the AUC for the majority of the treebanks falls between 0.55 and 0.65, and virtually none of the AUCs surpasses 0.7, which is generally considered a minimum threshold for a binary classifier to be considered accurate. Additionally, we observe that the OV languages tend to have a slightly higher AUC than the VO languages. We conclude that the accuracy of compchain classification is weakly correlated with the log of the sentence length, and that this correlation is slightly higher for OV languages than for VO languages. (Similar results were obtained when the analysis was carried out directly on the sentence length.)",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 159,
"text": "Figure 7",
"ref_id": null
},
{
"start": 1101,
"end": 1109,
"text": "Figure 8",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Impact of Sentence Length",
"sec_num": "4.2.2"
},
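{
"text": "The AUC analysis can be sketched without an explicit logistic fit: for a logistic model with a single feature, the fitted scores are a monotone transform of that feature, and ROC AUC is invariant under monotone transforms, so (up to the sign of the fitted coefficient) the model's AUC equals the rank-based Mann-Whitney AUC of the feature itself. A minimal illustrative sketch (ours, not the authors'), assuming per-sentence token counts and correctness flags:

```python
# Illustrative sketch: Mann-Whitney AUC of log sentence length as a
# predictor of compchain misclassification.
import math

def auc_length_vs_error(lengths, correct):
    pos = [math.log(n) for n, ok in zip(lengths, correct) if not ok]  # errors
    neg = [math.log(n) for n, ok in zip(lengths, correct) if ok]
    if not pos or not neg:
        return float('nan')
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0  # misclassified sentence is longer
            elif p == q:
                wins += 0.5  # ties count half
    return wins / (len(pos) * len(neg))
```

An AUC near 0.5 indicates that sentence length barely separates correctly from incorrectly classified sentences, consistent with the 0.55-0.65 range reported above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Sentence Length",
"sec_num": "4.2.2"
},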
{
"text": "In order to understand whether the compchain metric is simply a proxy for one of the three official evaluation metrics (LAS, BLEX and MLAS), we computed the pairwise linear correlation between each pair of metrics over the 61 UD treebanks. 13 Table 3 presents the coefficient of determination for each pairing of the metrics. We observe that although LAS, MLAS and BLEX are all highly correlated with one another, they are weakly correlated with the compchain metrics (i.e. the weighted average of the F1-score over all compchains and the per-compchain F1-scores); notably, performance on compchain classification for RX is very weakly correlated with LAS, MLAS and BLEX (R^2 < 0.1).",
"cite_spans": [
{
"start": 246,
"end": 248,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Comparison with Other Eval. Metrics",
"sec_num": "4.2.3"
},
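{
"text": "The pairwise comparison uses the coefficient of determination. A minimal illustrative sketch (ours, not the authors') for computing R^2 between two per-treebank metric vectors:

```python
# Illustrative sketch: coefficient of determination (squared Pearson
# correlation) between two equal-length metric vectors.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)
```

Applied to, e.g., per-treebank LAS scores against per-treebank RX F1-scores, this yields one entry of the pairwise comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Eval. Metrics",
"sec_num": "4.2.3"
},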
{
"text": "13 LAS, MLAS and BLEX scores for the CoNLL Shared Task baseline models were obtained from https://universaldependencies.org/conll18/baseline.html#baseline-results. This suggests that the compchain metric is measuring an aspect of the parser's performance that is not brought to the fore by any of the three official evaluation metrics, and that a baseline model having a good LAS, MLAS or BLEX score does not necessarily indicate that the model will correctly predict the embedding structure of a sentence with even a single level of embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Other Eval. Metrics",
"sec_num": "4.2.3"
},
{
"text": "In this study, we defined compchains and used them to evaluate how accurately a UD parser can parse sentences with predicate-argument structure that contains embedded clauses. We also used compchains to classify the errors, relevant to predicateargument structure with embedding, made by a UD parser. Overall model performance on the compchain classification task (as measured by weighted F-measure) was found to be dominated by parse trees in the training set with no embedding (compchain R); closer inspection of per-compchain performance revealed that parser accuracy dropped precipitously as the degree of embedding in the predicate argument structure (i.e. length of compchain) increased. Finally, our results indicate that UD treebanks have very few parse trees with degree of embedding (i.e. length of compchain) greater than two. This presents an opportunity: if the test sets of the UD treebanks were augmented with parses with predicate-argument structure with degree of embeddings greater than two, then UD parsers can be evaluated in terms of their capacity to generalize from constructions (in the training set) with (mostly) low degree of embedding, just as a child must in some models of first language acquisition (Wexler and Culicover, 1980; Berwick, 1985; Lightfoot, 1989 ).",
"cite_spans": [
{
"start": 1230,
"end": 1258,
"text": "(Wexler and Culicover, 1980;",
"ref_id": "BIBREF32"
},
{
"start": 1259,
"end": 1273,
"text": "Berwick, 1985;",
"ref_id": "BIBREF1"
},
{
"start": 1274,
"end": 1289,
"text": "Lightfoot, 1989",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Table-4 presents the distribution of compchains across 61 UD treebanks (including the three English treebanks analyzed earlier in this study). Table- 5 presents the F1-scores for the performance of each baseline models on the compchain classification task. The rows of Table 4 and Table 5 were seriated using the Google OR-Tools library so that rows with similar values appear close together: Table 4 is seriated so that languages with similar compchain distributions are clustered together; Table 5 is seriated so that languages with similar F1-scores are clustered together.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 149,
"text": "Table-",
"ref_id": null
},
{
"start": 269,
"end": 288,
"text": "Table 4 and Table 5",
"ref_id": null
},
{
"start": 393,
"end": 400,
"text": "Table 4",
"ref_id": null
},
{
"start": 492,
"end": 499,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "Computing Infrastructure: All experiments reported in this study were performed on a MacBook Pro (Retina, 15-inch, Late 2013) with a 2.3 GHz Intel Core i7 processor, and 16 GB of 1600 MHz DDR3 RAM. We used Python v3.7.9, Pandas v1.2.1 and Matplotlib v3.2.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "(The remainder of this page intentionally blank. Please see the next page.) italian-isdt 13121/482 61.0/69.9 3.5/1.9 3.9/3.1 0.2/0.6 0.2/0.2 0.1/-0.1/dutch-alpino 12269/596 63.2/63.9 4.6/5.5 4.8/4.4 0.3/0.3 0.3/0.7 0.2/1.0 0.2/estonian-edt 20827/2737 60.4/59.8 3.4/4.2 5.1/4.6 0.1/0.1 0.4/0.4 0.2/0.1 0.1/0.1 finnish-tdt 12217/1555 60.1/58.7 3.0/4.1 5.1/5.8 0.0/0.1 0.3/0.3 0.2/0.3 0.1/0.1 ukrainian-iu 4513/783 62.6/65.1 3.1/4.1 5.9/5.1 0.0/-0.3/0.5 0.2/-0.1/russian-syntagrus 48814/6491 60.6/60.9 3.4/2.8 6.6/6.5 0.0/0.0 0.4/0.3 0.2/0.2 0.1/0.1 ancient_greek-perseus 11476/1306 83.7/68.3 4.1/12.6 7.0/7.6 0.2/0.5 0.4/1.1 0.1/0.2 0.2/0.4 latin-ittb 15808/750 52.0/51.5 5.0/4.0 6.8/9.2 0.2/-0.2/0.3 0.4/-0.1/czech-pdt 68495/10148 53.8/54.6 4.9/4.6 6.9/6.6 0.2/0.2 0.9/0.9 0.3/0.3 0.1/0.1 croatian-set 6983/1057 53.8/55.2 5.8/8.6 7.4/6.5 0.2/0.1 1.1/1.1 0.3/0.8 0.0/0.1 gothic-proiel 3387/1029 75.4/77.5 6.7/6.3 8.6/7.7 0.2/0.3 0.5/0.5 0.1/-0.1/0.2 old_church_slavonic-proiel 4123/1141 80.4/82.6 6.0/5.2 8.3/7.2 0.1/0.1 0.8/0.4 -/0.1 0.3/0.1 english-lines 2738/914 64.5/63.5 4.9/4.7 7.4/5.5 0.0/0.2 0.6/0.7 0.4/0.7 0.4/0.3 polish-sz 6100/1100 72.5/72.9 5.0/5.3 7.6/7.2 0.0/0.2 0.6/0.6 0.4/0.4 0.1/-old_french-srcmf 13909/1927 79.2/78.6 4.8/4.0 7.7/8.2 0.1/0.1 0.4/0.4 0.2/0.2 0.2/0.1 french-gsd 14554/416 61.0/54.6 3.1/3.8 8.2/8.7 0.2/0.7 0.5/1.0 0.2/-0.3/0.5 polish-lfg 13774/1727 80.9/79.7 2.8/2.7 8.4/8.9 0.0/0.1 0.3/0.6 0.2/0.2 0.1/0.1 czech-cac 23478/628 59.2/61.3 2.2/2.4 6.8/6.7 0.1/-0.4/0.3 0.3/0.2 0.1/0.2 french-spoken 1153/726 49.6/52.6 1.6/4.8 7.7/4.5 0.1/0.1 0.2/0.8 0.2/-0.1/0.1 indonesian-gsd 4477/557 62.0/63.9 2.2/2.9 9.0/7.7 0.1/0.2 0.3/0.2 0.1/-0.7/0.9 french-sequoia 2231/456 44.1/41.9 3.0/3.5 13.2/12.9 -/-1.0/1.1 1.0/0.7 0.6/0.4 Table 4 : Distribution of Compchains in UD 2.2 Gold Treebanks. 
The column Total presents the number of trees in the training and test sections of each treebank, and is formatted as Count Training /Count Test ; the columns for each compchain present the percent of trees with that compchain in the training and test sections of the treebank respectively -e.g. with respect to the English-EWT treebank, 6% of the 12543 trees in the training section have the compchain RC whereas only 3.8% of the 2077 trees in the test section have the compchain RC. A dash (\"-\") indicates an absence of trees with that compchain (i.e. 0%).",
"cite_spans": [],
"ref_spans": [
{
"start": 1750,
"end": 1757,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "This study only considers dependency parse trees annotated with UD. We refer to such a parse tree as a UD parse tree.2 Furthermore,(Surdeanu et al., 2003) has demonstrated that correct annotation of predicate-argument structure can improve the performance of information extraction systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "E.g. UAS (unlabeled attachment score) and LAS (labeled attachment score).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See(Hale, 1993;Hale and Keyser, 2002) for further reference on predicate-argument structure.5 See universaldependencies.org/u/dep/ for more details.6 xcomp is often used to model control/raising constructions in which an argument in the embedded clause establishes a syntactic relation with the predicate in the matrix clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Table 4in the appendix for a complete listing of the distribution of compchains in the Test and Training treebank for each of the 61 languages.11 See Table-5 for a complete listing of performance on the compchain classification task for each UD treebank using the associated baseline model, including a breakdown of performance per-compchain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These results also hold when comparing the mean F1scores for compchains of length 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank three anonymous reviewers for their valuable feedback and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "TotalR RC RX RCC RCX RXC RXXvietnamese-vtb 1400/800 48.1/48.9 11.4/10.8 10.9/8.2 1.1/1.5 1.6/1.4 0.6/1.0 0.9/0.2 chinese-gsd 3997/500 52.6/53.4 11.1/11.4 10.0/8.6 1.2/0.6 1.5/1.0 1.1/0.8 0.9/0.4 catalan-ancora 13123/1846 65.0/66.5 10.1/7.9 5.8/6.1 0.6/0.2 0.6/0.5 0.2/0.1 0.2/0.2 serbian-set 2935/491 53.2/58.0 10.4/10.0 4.9/5.5 0.7/0.4 1.7/0.8 0.3/--/spanish-ancora 14305/1721 59.5/58.8 12.2/13.5 5.3/5.5 0.8/0.5 0.9/0.8 0.1/-0.1/0.2 greek-gdt 1662/456 62.1/59.9 13.0/9.9 5.6/5.0 1.0/0.4 1.1/0.7 0.4/0.7 -/galician-ctg 2272/861 58.5/57.6 14.0/13.6 1.8/1.3 2.2/1.4 0.2/0.6 0.1/--/0.1 persian-seraji 4798/600 46.5/46.5 15.1/18.2 0.1/-2.4/2.2 -/--/--/romanian-rrt 8043/729 72.4/71.9 10.1/12.1 1.5/2.3 0.7/1.2 0.1/-0.0/--/korean-kaist 23010/2287 61.3/61.1 8.3/6.8 0.1/-0.3/0.2 -/--/--/bulgarian-btb 8907/1116 66.1/64.7 9.5/14.0 3.5/1.3 0.7/0.6 0.3/0.1 0.3/-0.1/slovak-snk 8483/1061 66.1/59.8 8.2/2.1 4.6/3.3 0.3/-1.0/0.5 0.4/0.1 0.0/portuguese-bosque 8329/477 58.1/59.3 7.6/8.4 4.5/2.5 0.5/0.2 0.6/0.4 0.3/-0.3/latin-proiel 15906/1260 67.9/64.6 7.6/6.7 5.3/6.0 0.4/0.6 0.3/0.6 0.2/0.1 0.1/0.1 latvian-lvtb 5424/1228 62.3/60.7 7.7/7.2 6.2/4.8 0.4/0.2 0.7/0.9 0.5/0.7 0.1/czech-fictree 10160/1291 63.7/59.5 7.7/8.1 6.3/8.1 0.3/0.4 1.0/1.0 0.5/0.7 0.0/hebrew-htb 5241/491 62.5/61.7 6.5/3.7 6.8/6.1 0.2/-0.7/1.0 0. 15015/1047 72.9/72.5 5.6/4.9 5.2/6.7 0.2/0.3 0.2/0.6 0.1/0.1 0.1/0.1 slovenian-ssj 6478/788 65.5/62.2 6.0/7.4 4.8/5.7 0.2/0.4 0.6/0.8 0.4/0.6 0.0/0.1 danish-ddt 4383/565 59.5/57.0 6.6/9.9 3.7/4.1 0.2/0.4 0.3/0.5 0.0/0.2 0.0/finnish-ftb 14981/1867 65.0/64.9 5.4/6.4 Table 5 : F1-Scores for Compchains Classifications for each UD 2.2 Gold Treebanks. The test section of each gold treebank was parsed using the corresponding pre-trained UDPipe language model; the compchain classification was computed for each pair of gold and parsed treebanks, and we report: (i) the weighted average F1-score (over all compchains); (ii) the (per-class) F1-score for each compchain. 
Entries for which the F1-score could not be computed due to a lack of support are marked with a dash (\"-\").",
"cite_spans": [],
"ref_spans": [
{
"start": 1573,
"end": 1580,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Treebank",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Lines: An english-swedish parallel treebank",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Ahrenberg",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th Nordic Conference of Computational Linguistics (NODAL-IDA 2007)",
"volume": "",
"issue": "",
"pages": "270--273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Ahrenberg. 2007. Lines: An english-swedish par- allel treebank. In Proceedings of the 16th Nordic Conference of Computational Linguistics (NODAL- IDA 2007), pages 270-273.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The acquisition of syntactic knowledge",
"authors": [
{
"first": "C",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Berwick",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Berwick. 1985. The acquisition of syntactic knowledge. MIT press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Conll-x shared task on multilingual dependency parsing",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the tenth conference on computational natural language learning",
"volume": "",
"issue": "",
"pages": "149--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. Conll-x shared task on multilingual dependency parsing. In Proceedings of the tenth conference on computa- tional natural language learning, pages 149-164. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bi-directional attention with agreement for dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.02076"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. 2016. Bi-directional attention with agreement for dependency parsing. arXiv preprint arXiv:1608.02076.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Universal stanford dependencies: A cross-linguistic typology",
"authors": [
{
"first": "Marie-Catherine De",
"middle": [],
"last": "Marneffe",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Katri",
"middle": [],
"last": "Haverinen",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "14",
"issue": "",
"pages": "4585--4592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine De Marneffe, Timothy Dozat, Na- talia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D Manning. 2014. Universal stanford dependencies: A cross-linguistic typology. In LREC, volume 14, pages 4585-4592.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The World Atlas of Language Structures Online",
"authors": [
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Dryer",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew S. Dryer. 2013. Order of subject, object and verb. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures On- line. Max Planck Institute for Evolutionary Anthro- pology, Leipzig.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The 2018 shared task on extrinsic parser evaluation: on the downstream utility of english universal dependency parsers",
"authors": [
{
"first": "Murhaf",
"middle": [],
"last": "Fares",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "Ovrelid",
"suffix": ""
},
{
"first": "Jari",
"middle": [],
"last": "Bjorne",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "22--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murhaf Fares, Stephan Oepen, Lilja Ovrelid, Jari Bjorne, and Richard Johansson. 2018. The 2018 shared task on extrinsic parser evaluation: on the downstream utility of english universal dependency parsers. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 22-33.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On argument structure and the lexical expression of syntactic relations",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Hale",
"suffix": ""
}
],
"year": 1993,
"venue": "The view from Building 20: Essays in linguistics in honor of Sylvain Bromberger",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Hale. 1993. On argument structure and the lexical expression of syntactic relations. In Ken Hale and Samuel J. Keyser, editors, The view from Building 20: Essays in linguistics in honor of Syl- vain Bromberger. MIT Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Prolegomenon to a theory of argument structure",
"authors": [
{
"first": "Locke",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"Jay"
],
"last": "Hale",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Keyser",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Locke Hale and Samuel Jay Keyser. 2002. Prolegomenon to a theory of argument structure, vol- ume 39. MIT press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Simple and accurate dependency parsing using bidirectional lstm feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional lstm feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dragnn: A transitionbased framework for dynamically connected neural networks",
"authors": [
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Bogatyy",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.04474"
]
},
"num": null,
"urls": [],
"raw_text": "Lingpeng Kong, Chris Alberti, Daniel Andor, Ivan Bo- gatyy, and David Weiss. 2017. Dragnn: A transition- based framework for dynamically connected neural networks. arXiv preprint arXiv:1703.04474.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parser showdown at the wall street corral: An empirical investigation of error types in parser output",
"authors": [
{
"first": "K",
"middle": [],
"last": "Jonathan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "James R Curran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1048--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan K Kummerfeld, David Hall, James R Cur- ran, and Dan Klein. 2012. Parser showdown at the wall street corral: An empirical investigation of er- ror types in parser output. In Proceedings of the 2012 Joint Conference on Empirical Methods in Nat- ural Language Processing and Computational Natu- ral Language Learning, pages 1048-1059. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The child's trigger experience: Degree-0 learnability",
"authors": [
{
"first": "David",
"middle": [],
"last": "Lightfoot",
"suffix": ""
}
],
"year": 1989,
"venue": "Behavioral and Brain Sciences",
"volume": "12",
"issue": "2",
"pages": "321--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Lightfoot. 1989. The child's trigger experience: Degree-0 learnability. Behavioral and Brain Sci- ences, 12(2):321-334.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Breaking NLP: Using morphosyntax, semantics, pragmatics and world knowledge to fool sentiment analysis systems",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Mahler",
"suffix": ""
},
{
"first": "Willy",
"middle": [],
"last": "Cheung",
"suffix": ""
},
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Cory",
"middle": [],
"last": "Shain",
"suffix": ""
},
{
"first": "Symon",
"middle": [],
"last": "Stevens-Guille",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems",
"volume": "",
"issue": "",
"pages": "33--39",
"other_ids": {
"DOI": [
"10.18653/v1/W17-5405"
]
},
"num": null,
"urls": [],
"raw_text": "Taylor Mahler, Willy Cheung, Micha Elsner, David King, Marie-Catherine de Marneffe, Cory Shain, Symon Stevens-Guille, and Michael White. 2017. Breaking NLP: Using morphosyntax, semantics, pragmatics and world knowledge to fool sentiment analysis systems. In Proceedings of the First Work- shop on Building Linguistically Generalizable NLP Systems, pages 33-39, Copenhagen, Denmark. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Characterizing the errors of data-driven dependency parsing models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Joakim Nivre. 2007. Character- izing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Confer- ence on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "11th Conference of the European Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algo- rithms. In 11th Conference of the European Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Underspecified universal dependency structures as inputs for multilingual surface realisation",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "199--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anja Belz, Bernd Bohnet, and Leo Wan- ner. 2018. Underspecified universal dependency structures as inputs for multilingual surface reali- sation. In Proceedings of the 11th International Conference on Natural Language Generation, pages 199-209.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "From simple input to complex grammar",
"authors": [
{
"first": "",
"middle": [],
"last": "James L Morgan",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James L Morgan. 1986. From simple input to complex grammar. The MIT Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Identifying cascading errors using constraints in dependency parsing",
"authors": [
{
"first": "Dominick",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "James R Curran",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1148--1158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominick Ng and James R Curran. 2015. Identify- ing cascading errors using constraints in dependency parsing. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), volume 1, pages 1148-1158.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ryan",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Silveira",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan T McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal de- pendencies v1: A multilingual treebank collection. In LREC.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Universal dependency evaluation",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Chiao-Ting",
"middle": [],
"last": "Fang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies, 22 May",
"volume": "135",
"issue": "",
"pages": "86--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Chiao-Ting Fang. 2017. Univer- sal dependency evaluation. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependen- cies, 22 May, Gothenburg Sweden, 135, pages 86- 95. Link\u00f6ping University Electronic Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The conll 2007 shared task on dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mc-Donald",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, Sandra K\u00fcbler, Ryan Mc- Donald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The conll 2007 shared task on depen- dency parsing. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP-CoNLL).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Labeled pseudo-projective dependency parsing with support vector machines",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Svetoslav",
"middle": [],
"last": "Marinov",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)",
"volume": "",
"issue": "",
"pages": "221--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, Svetoslav Marinov, et al. 2006. Labeled pseudo-projective dependency parsing with support vector machines. In Proceedings of the Tenth Conference on Com- putational Natural Language Learning (CoNLL-X), pages 221-225.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The 2017 shared task on extrinsic parser evaluation towards a reusable community infrastructure",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Jari",
"middle": [],
"last": "Ovrelid",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Bjorne",
"suffix": ""
},
{
"first": "Emanuele",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Lapponi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Shared Task on Extrinsic Parser Evaluation",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen, L Ovrelid, Jari Bjorne, Richard Johans- son, Emanuele Lapponi, Filip Ginter, and Erik Vell- dal. 2017. The 2017 shared task on extrinsic parser evaluation towards a reusable community infrastruc- ture. Proceedings of the 2017 Shared Task on Ex- trinsic Parser Evaluation, pages 1-16.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Stanza: A Python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Mark Steedman, and Mirella Lapata",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "Tackstrom",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.03196"
]
},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Oscar Tackstrom, Slav Petrov, Mark Steed- man, and Mirella Lapata. 2017. Universal semantic parsing. arXiv preprint arXiv:1702.03196.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Enhanced english universal dependencies: An improved representation for natural language understanding tasks",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "23--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Schuster and Christopher D Manning. 2016. Enhanced english universal dependencies: An im- proved representation for natural language under- standing tasks. In LREC, pages 23-28. Portoro\u017e, Slovenia.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A gold standard dependency corpus for english",
"authors": [
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "2897--2904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalia Silveira, Timothy Dozat, Marie-Catherine De Marneffe, Samuel R Bowman, Miriam Connor, John Bauer, and Christopher D Manning. 2014. A gold standard dependency corpus for english. In LREC, pages 2897-2904.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "88--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. Proceedings of the CoNLL 2017 Shared Task: Mul- tilingual Parsing from Raw Text to Universal Depen- dencies, pages 88-99.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Using predicate-argument structures for information extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Aarseth",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument struc- tures for information extraction. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Formal principles of language acquisition",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Wexler",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"W"
],
"last": "Culicover",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth. Wexler and Peter W. Culicover. 1980. Formal principles of language acquisition. MIT Press.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Universal decompositional semantics on universal dependencies",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Steven White",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Rawlins",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1713--1723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Steven White, Drew Reisinger, Keisuke Sak- aguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Uni- versal decompositional semantics on universal de- pendencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1713-1723.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The gum corpus: Creating multilayer resources in the classroom",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zeldes",
"suffix": ""
}
],
"year": 2017,
"venue": "Lang. Resour. Eval",
"volume": "51",
"issue": "3",
"pages": "581--612",
"other_ids": {
"DOI": [
"10.1007/s10579-016-9343-x"
]
},
"num": null,
"urls": [],
"raw_text": "Amir Zeldes. 2017. The gum corpus: Creating mul- tilayer resources in the classroom. Lang. Resour. Eval., 51(3):581-612.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "1--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Jan Haji\u010d, Martin Popel, Martin Pot- thast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Mul- tilingual parsing from raw text to universal depen- dencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 1-21, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Badmaeva",
"suffix": ""
},
{
"first": "Memduh",
"middle": [],
"last": "Gokirmak",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Nedoluzhko",
"suffix": ""
},
{
"first": "Silvie",
"middle": [],
"last": "Cinkova",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic Jr",
"suffix": ""
},
{
"first": "Jaroslava",
"middle": [],
"last": "Hlavacova",
"suffix": ""
},
{
"first": "V\u00e1clava",
"middle": [],
"last": "Kettnerov\u00e1",
"suffix": ""
},
{
"first": "Zdenka",
"middle": [],
"last": "Uresova",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Stina",
"middle": [],
"last": "Ojala",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Missil\u00e4",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Dima",
"middle": [],
"last": "Taji",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Simi",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Kanayama",
"suffix": ""
},
{
"first": "Valeria",
"middle": [],
"last": "Depaiva",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Droganova",
"suffix": ""
},
{
"first": "H\u00e9ctor",
"middle": [],
"last": "Mart\u00ednez Alonso",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Martin Popel, Milan Straka, Jan Ha- jic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Fran- cis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, V\u00e1clava Kettnerov\u00e1, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Mis- sil\u00e4, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Le- ung, Marie-Catherine de Marneffe, Manuela San- guinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, H\u00e9ctor Mart\u00ednez Alonso, \u00c7agr\u0131 \u00c7\u00f6ltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirch- ner, Hector Fernandez Alcalde, Jana Strnadov\u00e1, Esha Banerjee, Ruli Manurung, Antonio Stella, At- suko Shimada, Sookyoung Kwak, Gustavo Men- donca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. Conll 2017 shared task: Multilingual pars- ing from raw text to universal dependencies. In Pro- ceedings of the CoNLL 2017 Shared Task: Multi- lingual Parsing from Raw Text to Universal Depen- dencies, pages 1-19. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Dependency parsing as head selection",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01280"
]
},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2016. Dependency parsing as head selection. arXiv preprint arXiv:1606.01280.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graph- based and transition-based dependency parsing us- ing beam-search. In Proceedings of the Conference",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Examples of compchain classifications (left) for four UD parses (right). The parses in",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Confusion Matrices for Compchain Classification of the EWT, GUM and LinES UD (English) treebanks using their respective CoNLL'18 UDPipe Baseline Models.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Distribution of Compchains in UD Training and Test Treebanks. 59 of the 61 languages had degree-2 compchains present in the test treebank; the languages with no degree-2 compchains in the test treebank were turkish-imst and urdu-udtb.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "F1-scores for Length 2 vs. Length 1 compchains for each language in the UD treebank.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "F1-scores for Length 2 compchains (i.e. RC and RX) for each language in the UD treebanks.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Histogram of Area-under-Curve (AUC) of Receiver Operator Characteristic (ROC) curve for Logistic Regression model of per-Sentence Compchain Classification Accuracy vs. log(Sentence Length). The AUC of ROC curve was computed for each UD test treebank.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Compchain</td><td/><td/><td>EWT</td><td/><td/><td colspan=\"2\">LinES</td><td/><td/><td colspan=\"2\">GUM</td></tr><tr><td/><td colspan=\"4\">F1 Prec. Rec. Support</td><td colspan=\"4\">F1 Prec. Rec. Support</td><td colspan=\"4\">F1 Prec. Rec. Support</td></tr><tr><td>/ 0</td><td>0.94</td><td colspan=\"2\">0.95 0.94</td><td colspan=\"2\">1065 0.74</td><td colspan=\"2\">0.72 0.75</td><td colspan=\"2\">224 0.85</td><td>0.81</td><td>0.9</td><td>268</td></tr><tr><td>R</td><td>0.89</td><td>0.89</td><td>0.9</td><td colspan=\"2\">806 0.85</td><td colspan=\"2\">0.87 0.83</td><td colspan=\"2\">580 0.85</td><td colspan=\"2\">0.89 0.81</td><td>419</td></tr><tr><td>RC</td><td>0.73</td><td colspan=\"2\">0.72 0.73</td><td colspan=\"2\">79 0.43</td><td colspan=\"2\">0.44 0.42</td><td colspan=\"2\">43 0.54</td><td colspan=\"2\">0.53 0.55</td><td>33</td></tr><tr><td>RX</td><td>0.79</td><td colspan=\"2\">0.8 0.79</td><td colspan=\"2\">104 0.53</td><td colspan=\"2\">0.44 0.66</td><td colspan=\"2\">50 0.64</td><td colspan=\"2\">0.57 0.73</td><td>41</td></tr><tr><td>RCC</td><td>0.67</td><td colspan=\"2\">0.67 0.67</td><td>6</td><td>0</td><td>0</td><td>0</td><td>2</td><td>0.4</td><td>0.33</td><td>0.5</td><td>2</td></tr><tr><td>RCX</td><td>0.4</td><td colspan=\"2\">0.5 0.33</td><td colspan=\"2\">9 0.25</td><td colspan=\"2\">0.5 0.17</td><td>6</td><td>0.5</td><td>0.33</td><td>1</td><td>1</td></tr><tr><td>RXC</td><td>0.33</td><td colspan=\"2\">0.33 0.33</td><td colspan=\"2\">3 0.55</td><td>0.6</td><td>0.5</td><td>6</td><td>0</td><td>0</td><td>0</td><td>2</td></tr><tr><td>RXX</td><td>1</td><td>1</td><td>1</td><td colspan=\"2\">2 0.44</td><td colspan=\"2\">0.33 0.67</td><td>3</td><td>0.4</td><td colspan=\"2\">0.5 0.33</td><td>3</td></tr><tr><td>W. Avg.</td><td>0.9</td><td>0.9</td><td>0.9</td><td colspan=\"2\">2077 0.78</td><td colspan=\"2\">0.78 0.77</td><td colspan=\"2\">914 0.82</td><td colspan=\"2\">0.83 0.82</td><td>769</td></tr></table>",
"text": "Distributions of compchains across the three treebanks. Counts for compchains with four or more dependency relations are not listed here because their presence in the three treebanks was negligible, although they are included in the \"Total\" count. Although there are very few compchains with three or more dependency relations (e.g. RCC) in the test sections of the treebanks, there are a non-negligible number of them in the training sections."
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Coefficient of determination (R 2 ) for pair-</td></tr><tr><td>wise (linear) correlations of metric-scores over all</td></tr><tr><td>CoNLL'18 Shared Task baseline models.</td></tr></table>",
"text": ""
}
}
}
}