{
"paper_id": "S19-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:46:01.197347Z"
},
"title": "Automatic Accuracy Prediction for AMR Parsing",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": "",
"affiliation": {
"laboratory": "Research Training Group AIPHES Leibniz ScienceCampus \"Empirical Linguistics and Computational Language Modeling\" Department for Computational Linguistics",
"institution": "",
"location": {
"postCode": "69120",
"settlement": "Heidelberg"
}
},
"email": "[email protected]"
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": "",
"affiliation": {
"laboratory": "Research Training Group AIPHES Leibniz ScienceCampus \"Empirical Linguistics and Computational Language Modeling\" Department for Computational Linguistics",
"institution": "",
"location": {
"postCode": "69120",
"settlement": "Heidelberg"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Meaning Representation (AMR) represents sentences as directed, acyclic and rooted graphs, aiming at capturing their meaning in a machine readable format. AMR parsing converts natural language sentences into such graphs. However, evaluating a parser on new data by means of comparison to manually created AMR graphs is very costly. Also, we would like to be able to detect parses of questionable quality, or preferring results of alternative systems by selecting the ones for which we can assess good quality. We propose AMR accuracy prediction as the task of predicting several metrics of correctness for an automatically generated AMR parse-in absence of the corresponding gold parse. We develop a neural end-to-end multi-output regression model and perform three case studies: firstly, we evaluate the model's capacity of predicting AMR parse accuracies and test whether it can reliably assign high scores to gold parses. Secondly, we perform parse selection based on predicted parse accuracies of candidate parses from alternative systems, with the aim of improving overall results. Finally, we predict system ranks for submissions from two AMR shared tasks on the basis of their predicted parse accuracy averages. All experiments are carried out across two different domains and show that our method is effective.",
"pdf_parse": {
"paper_id": "S19-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "Meaning Representation (AMR) represents sentences as directed, acyclic and rooted graphs, aiming at capturing their meaning in a machine readable format. AMR parsing converts natural language sentences into such graphs. However, evaluating a parser on new data by means of comparison to manually created AMR graphs is very costly. Also, we would like to be able to detect parses of questionable quality, or preferring results of alternative systems by selecting the ones for which we can assess good quality. We propose AMR accuracy prediction as the task of predicting several metrics of correctness for an automatically generated AMR parse-in absence of the corresponding gold parse. We develop a neural end-to-end multi-output regression model and perform three case studies: firstly, we evaluate the model's capacity of predicting AMR parse accuracies and test whether it can reliably assign high scores to gold parses. Secondly, we perform parse selection based on predicted parse accuracies of candidate parses from alternative systems, with the aim of improving overall results. Finally, we predict system ranks for submissions from two AMR shared tasks on the basis of their predicted parse accuracy averages. All experiments are carried out across two different domains and show that our method is effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Abstract Meaning Representation (AMR) (Banarescu et al., 2013) represents the semantic structure of a sentence, including concepts, semantic operators and relations, sense-disambiguated predicates and their arguments. As a machine readable representation of the meaning of a sentence, AMR is potentially useful for many NLP tasks. Among other applications it has been used in machine translation (Jones et al., 2012 ), text (a / asbestos :polarity -:time (n / now) :location (t / thing :ARG1-of (p / produce-01 :ARG0 (w / we))))",
"cite_spans": [
{
"start": 38,
"end": 62,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 396,
"end": 415,
"text": "(Jones et al., 2012",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Humanly produced AMR for: There is no asbestos in our products now. Numbered predicates refer to PropBank senses (Palmer et al., 2005) .",
"cite_spans": [
{
"start": 123,
"end": 144,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "summarization (Liu et al., 2015; Dohare and Karnick, 2017) and question answering (Mitra and Baral, 2016) . Since the introduction of AMR, many approaches to AMR parsing have been proposed: graph-based pipeline systems which rely on an alignment step (Flanigan et al., 2014 (Flanigan et al., , 2016 or transition-based parsers relying on dependency annotation (Wang et al., 2015b (Wang et al., ,a, 2016a . In the following we will denote the former by JAMR and the latter by CAMR. More recently, endto-end neural systems have been proposed which produce linearized AMR graphs within characterbased (van Noord and Bos, 2017b) or word-based (Konstas et al., 2017) encoding models. Both approaches greatly profit from large amounts of silver training data. The silver data is obtained with self-training (Konstas et al., 2017) or the aid of additional parsers, where only parses with considerable agreement are chosen to extend the training data (van Noord and Bos, 2017b) . Lyu and Titov (2018) formulate a neural model that jointly predicts alignments, concepts and relations. Their system -henceforth called GPLA (Graph Prediction with Latent Alignments) -defines the current state-of-the-art in AMR parsing.",
"cite_spans": [
{
"start": 14,
"end": 32,
"text": "(Liu et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 33,
"end": 58,
"text": "Dohare and Karnick, 2017)",
"ref_id": "BIBREF13"
},
{
"start": 82,
"end": 105,
"text": "(Mitra and Baral, 2016)",
"ref_id": "BIBREF32"
},
{
"start": 251,
"end": 273,
"text": "(Flanigan et al., 2014",
"ref_id": "BIBREF15"
},
{
"start": 274,
"end": 298,
"text": "(Flanigan et al., , 2016",
"ref_id": "BIBREF14"
},
{
"start": 360,
"end": 379,
"text": "(Wang et al., 2015b",
"ref_id": "BIBREF49"
},
{
"start": 380,
"end": 403,
"text": "(Wang et al., ,a, 2016a",
"ref_id": "BIBREF46"
},
{
"start": 598,
"end": 624,
"text": "(van Noord and Bos, 2017b)",
"ref_id": "BIBREF35"
},
{
"start": 639,
"end": 661,
"text": "(Konstas et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 801,
"end": 823,
"text": "(Konstas et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 943,
"end": 969,
"text": "(van Noord and Bos, 2017b)",
"ref_id": "BIBREF35"
},
{
"start": 972,
"end": 992,
"text": "Lyu and Titov (2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A system that can perform accuracy prediction for AMR parsing can be used in a variety of ways: (i) estimating the quality of downstream tasks that deploy AMR parses. E.g., in a document summarization scenario, we might expect lower qual-ity of a summary if the estimated quality of AMR parses used as a basis for the summary is low; (ii) AMR parsing accuracy estimation can be used to produce high-quality automatically parsed data: by filtering the outputs of single parsing systems in self-training, by selecting high-quality outputs from different parsing systems in a tri-parsing setting, or else by predicting overall rankings over alternative parsing systems applied to in-or outof-domain data; (iii) finally, AMR parse accuracy prediction could be used in the context of a parsersupported treebank construction process. E.g., in an active learning scenario, we can select useful targets for manual annotation based on their expected efficiency for parser improvement -the fine-grained evaluation measures predicted by our system can be used for targeted improvements. In the simplest case, we can provide the human annotator with automatic parses where only few flaws have to be mended. Hence, AMR accuracy prediction systems have the potential to tremendously reduce manual annotation cost and time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We define AMR accuracy prediction as the task of predicting a rich suite of metrics to assess various subtasks covered by AMR parsing (e.g. negation detection or semantic role labeling). To approach this task, we use the AMR evaluation suite suggested by Damonte et al. (2017) and develop a hierarchical multi-output regression model for automatically performing evaluation of 12 different tasks involved in AMR parsing (Sections \u00a73 and \u00a74; our code is publicly accessible 1 ). We perform experiments in three different scenarios on unseen in-domain and out-of-domain data and show that our model (i) is able to predict scores with significant correlation to gold scores and (ii) can be used to rank parses on a sentencelevel or to rank parsers on a corpus-level ( \u00a75).",
"cite_spans": [
{
"start": 255,
"end": 276,
"text": "Damonte et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions",
"sec_num": null
},
{
"text": "Automatic accuracy prediction for syntactic parsing comes closest to what we are doing. Ravi et al. (2008) propose a feature-based SVM regression model with RBF kernel that predicts syntactic parser performance on different domains. Like us, they aim at a cheap and effective means for estimating a parser's performance. However, in contrast to their work, our method is domain and parser agnostic: we do not take into account characteristics of the domains of interest and do not provide any performance statistics of the competing parsing systems as features to our regressor. Biici (2016) addresses the task without any domain-dependent features, which results in a lower correlation to gold scores -even if additional features from a background language model are incorporated. In contrast to the prior systems that predict a single score, we predict an ensemble of metrics suitable for assessing AMR parse quality with respect to different linguistic aspects. Also, our system does not rely on externally derived features or complex pre-processing. Moreover, an AMR graph differs in important ways from a syntactic tree. Nodes in AMR do not explicitly correspond to words (as in dependency trees) or phrases (as in constituency trees). AMR structure elements can exist without any alignment to words in the sentence. To our knowledge, we are the first to propose an accuracy prediction model for AMR parsing, and offer the first general end-to-end parse accuracy prediction model that predicts an ensemble of scores for different linguistic aspects.",
"cite_spans": [
{
"start": 88,
"end": 106,
"text": "Ravi et al. (2008)",
"ref_id": "BIBREF40"
},
{
"start": 579,
"end": 591,
"text": "Biici (2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Automatic accuracy prediction has also been researched for PoS-tagging (Van Asch and Daelemans, 2010) and in machine translation. For example, Soricut and Narsale (2012) predict BLEU scores for machine-produced translations. Under the umbrella of quality estimation researchers try to predict, i.a., the post-editing time or missing words in an automatic translation (Cai and Knight, 2013; Joshi et al., 2016; Chatterjee et al., 2018; Kim et al., 2017; Specia et al., 2013) . The fact that manually creating an AMR graph is significantly more costly than a translation provides another compelling argument for investigating automatic AMR accuracy prediction techniques . 2 In recent work, Smith (2011, 2017) ; Jain et al. (2015); Rehbein and Ruppenhofer (2018) detect annotation errors in automatically produced dependency parses. The latter approach uses active learning and ensemble parsing in combination with variational inference. They predict edge labelling and attachment errors and use a back-and-forth encoding mechanism from non-structured to structured tree data in order to provide the variational inference model with the (a / asbestos (a / asbestos :time (n / now) :polarity -:polarity -:location (p / product) :location (p / product :time (n / now)) :poss (w / we))) __________________________ (a / asbesto metr.(F1)| GP JA Figure 2 : Three AMR parses for: There is no asbestos in our products now, generated by GPLA (top), JAMR (bottom), CAMR (right). Light and severe errors are found in GPLA and JAMR parses; CAMR fails to provide we, the manufacturer of the product. Bottom right: F1 for Smatch and three example subtasks from evaluation against the gold parse (given in Figure 1 ",
"cite_spans": [
{
"start": 143,
"end": 169,
"text": "Soricut and Narsale (2012)",
"ref_id": "BIBREF43"
},
{
"start": 367,
"end": 389,
"text": "(Cai and Knight, 2013;",
"ref_id": "BIBREF7"
},
{
"start": 390,
"end": 409,
"text": "Joshi et al., 2016;",
"ref_id": "BIBREF23"
},
{
"start": 410,
"end": 434,
"text": "Chatterjee et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 435,
"end": 452,
"text": "Kim et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 453,
"end": 473,
"text": "Specia et al., 2013)",
"ref_id": "BIBREF44"
},
{
"start": 671,
"end": 672,
"text": "2",
"ref_id": null
},
{
"start": 689,
"end": 707,
"text": "Smith (2011, 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1339,
"end": 1347,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1690,
"end": 1698,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Automatic AMR parses are often deficient. Consider the examples in Figure 2 . All parsers correctly detect the negation and its scope. The GPLA parse (top) provides a graph structure close to the gold annotation ( Figure 1 ). However, it does not correctly analyze the possessive our (product), which in the gold parse is represented as an object produced by the speaker (we). Instead it recognizes a location in the speaker's possession. JAMR (middle) fails to detect the concept in focus (asbestos), possibly due to a false-positive stemming mistake. Moreover, it fails to represent that asbestos is (not) in the product: it misses the :location-edge from asbestos to product.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 2",
"ref_id": null
},
{
"start": 214,
"end": 222,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy Metrics for AMR Parsing",
"sec_num": "3"
},
{
"text": "AMR accuracy metrics Usually, a predicted AMR graph G is evaluated against a gold graph G using triple matching based on a maximally scoring variable mapping. For finding the optimal variable mapping, Integer Linear Programming (ILP) can be used in the Smatch metric (Cai and Knight, 2013) , which produces precision, recall and F1 score between G and G . While it is important to obtain a global measure of parse accuracy, we may also be interested in a quality assessment that focuses on specific subtasks or meaning aspects, such as entity linking, negation detection or word sense disambiguation (WSD). For example, if a parser commits a WSD error this might be less harmful than e.g., failing to capture negation, or missing or wrongly predicting a semantic role. However, the Smatch calculation would treat many of such errors with equal weight -a property which in some cases may be undesirable.",
"cite_spans": [
{
"start": 267,
"end": 289,
"text": "(Cai and Knight, 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Metrics for AMR Parsing",
"sec_num": "3"
},
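To make the triple-matching arithmetic concrete, here is a minimal sketch (not the authors' code; the function name and example triples are illustrative) of how precision, recall and F1 fall out of triple overlap once a variable mapping has been fixed. The real Smatch metric additionally searches for the variable mapping that maximizes the match, e.g. via ILP as described above.

```python
# Minimal sketch: Smatch-style scores for a FIXED variable mapping.
# Real Smatch also searches for the mapping that maximizes the
# number of matching triples (e.g. via ILP or hill climbing).

def prf(pred_triples, gold_triples):
    """Precision, recall and F1 over AMR triples such as
    ('instance', 'a', 'asbestos') or ('polarity', 'a', '-')."""
    matched = len(set(pred_triples) & set(gold_triples))
    p = matched / len(pred_triples) if pred_triples else 0.0
    r = matched / len(gold_triples) if gold_triples else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

gold = [("instance", "a", "asbestos"), ("polarity", "a", "-"),
        ("instance", "n", "now"), ("time", "a", "n")]
pred = [("instance", "a", "asbestos"), ("instance", "n", "now"),
        ("time", "a", "n")]  # the negation is missed
print(prf(pred, gold))  # (1.0, 0.75, ~0.857)
```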
{
"text": "To alleviate this issue, Damonte et al. 2017proposed an extended AMR evaluation suite which allows parser performance inspection with regard to 11 additional subtasks captured by AMR. In total, 36 metrics can be computed (precision, recall and F1 for 12 tasks). F1 scores for three example metrics are displayed in Figure 2 (bottom, right): Smatch, SRL (Smatch computed on arg-i roles), IgnoreVars (triple overlap after replacing variables with concepts) and Concepts (F1 for concept identification). 3 GPLA produces the overall best parse but it is is outperformed by the other systems in SRL (JAMR) and IgnoreVars (CAMR).",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 323,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy Metrics for AMR Parsing",
"sec_num": "3"
},
{
"text": "We adopt the proposed metrics by Damonte et al. (2017) and use them as target metrics for our task of AMR parse accuracy prediction. Given an automatic AMR graph G and a corresponding sentence S, we estimate precision, recall and F1 of the main task (Smatch) and of the subtasks, as they would emerge from comparing G with its gold counterpart G .",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "Damonte et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": null
},
{
"text": "One of our hypotheses is that predicting a wide range of accuracy metric scores for individual aspects of AMR structures will aid our model to better predict the global Smatch scores. We will therefore investigate a hierarchical model that builds on predicted subtask measures in order to predict the global smatch score. Being able to predict fine-grained quality aspects of AMR parses will also be useful to assess and exploit differences of alternative system outputs and provides a basis for guiding system development or targeted annotation in an active learning setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": null
},
{
"text": "We propose a neural hierarchical multi-output regression model for accuracy prediction of AMR ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Accuracy Prediction Model",
"sec_num": "4"
},
{
"text": "Figure 3: Our model: green: Evaluation metrics computed in a non-hierarchical fashion. orange: Main evaluation metric is computed on top of secondary metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "parses. Its architecture is outlined in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "Inputs Our model takes the following inputs: (i) a linearized AMR and a linearized dependency graph (implementation details in \u00a75). The motivation for feeding the dependency parse instead of the original sentence is due to the moderate similarity of dependency and AMR structures. 4 We examine drawbacks and benefits of providing automatic dependency parses more closely in our ablation experiments ( \u00a75.4). In addition, (ii) we produce alignments between sentence tokens and tokens in the sequential AMR structure, as well as between sentence tokens and the linearized dependency structure, and feed these sequences of pointers to our accuracy prediction model. The intuition of using pointers is to provide the model with richer information via shallow alignment between AMR, dependencies and the sequence of sentence tokens (see Section \u00a75 for implementation details). Finally, (iii) we feed a sequence of PropBank sense indicators for AMR predicates.",
"cite_spans": [
{
"start": 281,
"end": 282,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "Joint encoding of AMR and dependency parses for metric prediction Embedding layers are shared between AMR/dependency pointers and AMR/dependency tokens. We embed the three sequences representing the AMR graph (tokens, pointers and senses) in three matrices and sum them up element-wise (indicated with + in Figure 3 ). The same procedure is applied to the linearized dependency graph (tokens and pointers). The resulting matrices are processed by two two-layered Bi-LSTMs to yield vectorized representations for (i) the AMR graph and (ii) the dependency tree (i.e., the last states of forward and backward reads are concatenated). Thereafter, we apply element-Hierarchical prediction of multiple metrics The task naturally lends itself to be formulated in a hierarchical multi-task setup (orange, Figure 3) . In this strand, we first compute the 33 fine-grained subtask metrics and on their basis we caclulate the Smatch scores (precision, recall, F1) as our primary metrics. In order to accomplish this, we collect the outputs from the subtask metric prediction layer in a vector and concatenate it with the previous layer's representation (\u2295 in Figure 3 ). The resulting vector is fed through a last FF layer to predict the metrics for the task of main interest (Smatch). Our intuition is that the estimated quality of the parse with respect to the subtask metrics informs the model and allows it to better predict the overall quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 316,
"text": "Figure 3",
"ref_id": null
},
{
"start": 798,
"end": 807,
"text": "Figure 3)",
"ref_id": null
},
{
"start": 1148,
"end": 1156,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
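As a rough illustration of the encoder just described, the following PyTorch sketch wires up shared embeddings, two two-layer Bi-LSTMs, the element-wise combination and the hierarchical output heads. All class and parameter names, layer sizes and the exact state readout are our own illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    """Sketch of the hierarchical multi-output regressor described above."""

    def __init__(self, n_tokens, n_senses, max_ptr, dim=128,
                 n_sub=33, n_main=3):
        super().__init__()
        self.tok = nn.Embedding(n_tokens, dim)       # shared AMR/dep tokens
        self.ptr = nn.Embedding(max_ptr + 2, dim)    # shared pointers (incl. -1)
        self.sense = nn.Embedding(n_senses, dim)     # PropBank sense indicators
        self.amr_lstm = nn.LSTM(dim, dim, num_layers=2,
                                bidirectional=True, batch_first=True)
        self.dep_lstm = nn.LSTM(dim, dim, num_layers=2,
                                bidirectional=True, batch_first=True)
        self.sub_head = nn.Linear(6 * dim, n_sub)            # 33 subtask metrics
        self.main_head = nn.Linear(6 * dim + n_sub, n_main)  # Smatch P/R/F1

    @staticmethod
    def _encode(lstm, x):
        out, _ = lstm(x)
        h = out.size(-1) // 2
        # concatenate the final forward and final backward states
        return torch.cat([out[:, -1, :h], out[:, 0, h:]], dim=-1)

    def forward(self, amr_tok, amr_ptr, amr_sense, dep_tok, dep_ptr):
        # element-wise sum of the embedded input sequences ("+" in Figure 3)
        amr = self.tok(amr_tok) + self.ptr(amr_ptr) + self.sense(amr_sense)
        dep = self.tok(dep_tok) + self.ptr(dep_ptr)
        a = self._encode(self.amr_lstm, amr)
        d = self._encode(self.dep_lstm, dep)
        # element-wise multiplication, subtraction, addition; then concatenate
        joint = torch.cat([a * d, a - d, a + d], dim=-1)
        sub = torch.sigmoid(self.sub_head(joint))            # fine-grained metrics
        # feed the subtask predictions into the main (Smatch) head
        main = torch.sigmoid(self.main_head(torch.cat([joint, sub], dim=-1)))
        return sub, main
```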
{
"text": "Loss In the non-hierarchical case, we denote our full model with f \u03b8 : X \u2192 [0, 1] d with parameters \u03b8, where d describes the dimensionality of the score vector (one dimension represents one metric) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "D = {(X i , y i )} N i=1 , y i \u2208 [0, 1] d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "is our training data. In the non-hierarchical model, we minimize the mean squared error:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(f \u03b8 ) = 1 dN N i=1 d j=1 (y i,j \u2212 f \u03b8 (X i ) j ) 2",
"eq_num": "(1)"
}
],
"section": "FF",
"sec_num": null
},
{
"text": "For our hierarchical model, we have two functions, f \u03b8 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "X \u2192 [0, 1] (d\u2212k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "which returns the output vector for the (d \u2212 k) subtask metrics and f \u03b8 : X \u2192 [0, 1] k which returns the output vector for our k main metrics (in our experiments, k = 3 for Smatch recall, precision and F1). Then,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "(f \u03b8 , f \u03b8 ) = \u03bb 1 (d \u2212 k)N N i=1 d\u2212k j=1 (y i,j \u2212 f \u03b8 (X i ) j ) 2 + \u03bb 2 kN N i=1 d j=d\u2212k+1 (y i,j \u2212 f \u03b8 (X i ) j\u2212(d\u2212k) ) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
{
"text": "defines the total loss over the two entangled metric prediction models. Note that \u03b8 \u2282 \u03b8 , which means that by optimizing the parameters of f with gradient descent, we also concurrently optimize all parameters of f . By this construction, the hierarchical model instantiates a two-task model with shared parameters. For our experiments, we manually set the loss weights \u03bb 1 = 0.2, \u03bb 2 = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FF",
"sec_num": null
},
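A minimal PyTorch rendering of this two-part loss, under the assumption that the gold score matrix y stores the k main (Smatch) metrics in its last columns; names and the layout convention are illustrative.

```python
import torch

def hierarchical_loss(sub_pred, main_pred, y, k=3, lam1=0.2, lam2=1.0):
    """Weighted two-part MSE from the equation above. y: (N, d) gold
    scores whose last k columns are the main (Smatch) metrics."""
    y_sub, y_main = y[:, :-k], y[:, -k:]
    sub_term = ((y_sub - sub_pred) ** 2).mean()     # = 1/((d-k)N) * sum of squares
    main_term = ((y_main - main_pred) ** 2).mean()  # = 1/(kN) * sum of squares
    return lam1 * sub_term + lam2 * main_term
```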
{
"text": "Data Since our goal is to predict the accuracy of an automatic parse, we need a data set containing automatically produced AMR parses and their scores, as they would emerge from comparison to gold parses. Our largest data set, LDC2015E86, comprises 19,572 sentences and comes in a predefined training, development and test split. We parse this data set with three parsers, JAMR (Flanigan et al., 2014 (Flanigan et al., , 2016 , CAMR (Wang et al., 2015b (Wang et al., ,a, 2016a and GPLA (Lyu and Titov, 2018) .",
"cite_spans": [
{
"start": 378,
"end": 400,
"text": "(Flanigan et al., 2014",
"ref_id": "BIBREF15"
},
{
"start": 401,
"end": 425,
"text": "(Flanigan et al., , 2016",
"ref_id": "BIBREF14"
},
{
"start": 433,
"end": 452,
"text": "(Wang et al., 2015b",
"ref_id": "BIBREF49"
},
{
"start": 453,
"end": 476,
"text": "(Wang et al., ,a, 2016a",
"ref_id": "BIBREF46"
},
{
"start": 486,
"end": 507,
"text": "(Lyu and Titov, 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Since the three parsers have been trained on the training data partition, we naturally obtain more accurate parses for the training partition than for development and test data. Table 1 , however, indicates that we still obtain a considerable amount of deficient parses for training. Based on the parser outputs we compute evaluations comparing the automatic parses with the gold parses by using amrevaluation-tool-enhanced 5 , a bug-fixed version of the script that computes the metrics of Damonte et al. (2017) . This allows us to create full-fledged training, development and test instances for our accuracy prediction task. Each instance consists of a sentence and an AMR parse as input and a vector of metric scores as target.",
"cite_spans": [
{
"start": 491,
"end": 512,
"text": "Damonte et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 178,
"end": 185,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Our second data set, LDC2015R36, comprises submissions to the SemEval-2016 Task 8 (May, 2016) . We have 1053 parses from each of the 11 team submissions (and 2 baseline systems). 6 Our 5 https://github.com/ChunchuanLv/ amr-evaluation-tool-enhanced 6 Riga ( Barzdins and Gosko, 2016) , CMU (equal to JAMR) (Flanigan et al., 2016) , Brandeis (Wang et al., 2016b) , UofR (Peng and Gildea, 2016) , ICL-HD (Brandt et al., 2016) , M2L (Puzikov et al., 2016) , UMD (Rao et al., 2016) third dataset, BioAMRTest is used as the test set in the SemEval-2017 Task 9 (May and Priyadarshi, 2017) and consists of 500 parses from each of the 6 teams. 7 The shared task organizers kindly made this data available for our experiments.",
"cite_spans": [
{
"start": 82,
"end": 93,
"text": "(May, 2016)",
"ref_id": "BIBREF30"
},
{
"start": 257,
"end": 282,
"text": "Barzdins and Gosko, 2016)",
"ref_id": "BIBREF1"
},
{
"start": 305,
"end": 328,
"text": "(Flanigan et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 340,
"end": 360,
"text": "(Wang et al., 2016b)",
"ref_id": "BIBREF47"
},
{
"start": 368,
"end": 391,
"text": "(Peng and Gildea, 2016)",
"ref_id": "BIBREF37"
},
{
"start": 401,
"end": 422,
"text": "(Brandt et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 429,
"end": 451,
"text": "(Puzikov et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 458,
"end": 476,
"text": "(Rao et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 554,
"end": 581,
"text": "(May and Priyadarshi, 2017)",
"ref_id": "BIBREF31"
},
{
"start": 635,
"end": 636,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Preprocessing For dependency annotation, we parse all sentences with spacyV2.0 8 . For sequentializing the AMR and dependency graph representations we take intuitions from van Noord and Bos (2017b) & Konstas et al. (2017) and output tokens by performing a depth-first-search over the graph. We replace the AMR negation token '-' and strings representing numbers with special tokens. The vocabularies (tokens, senses and pointers) are computed from our training partition of LDC2015E86 and comprise all tokens with a frequency \u2265 5 (tokens with lesser frequency are replaced by an OOV-token). PropBank senses of predicates are removed and collected in an extra list that is parallel to the tokens in the linearized AMR sequence. For each linearized AMR and dependency tree we generate a sequence with index pointers to tokens in the original sentence (-1 for tokens which do not explicitly refer to any token in the sentence, e.g. brackets, 'subj' or 'arg0' relations). Extraction of token-pointers from the dependency graph is trivial. For every concept in the linearized AMR we execute a search for the corresponding token in the sentence, looking for exact matches with surface tokens and lemmas.",
"cite_spans": [
{
"start": 200,
"end": 221,
"text": "Konstas et al. (2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
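The pointer extraction described above might look roughly as follows; this is a simplified sketch with hypothetical names (no lemmatizer, no OOV handling), not the released preprocessing code.

```python
def token_pointers(linearized, sent_tokens, sent_lemmas):
    """For every element of the linearized graph, emit the index of an
    exactly matching sentence token or lemma, and -1 for structural
    symbols such as brackets or relation labels."""
    pointers = []
    for tok in linearized:
        idx = -1
        if not (tok.startswith(":") or tok in ("[", "]", "(", ")")):
            for i, (w, l) in enumerate(zip(sent_tokens, sent_lemmas)):
                if tok == w or tok == l:
                    idx = i
                    break
        pointers.append(idx)
    return pointers

sent = ["There", "is", "no", "asbestos", "in", "our", "products", "now"]
lemmas = ["there", "be", "no", "asbestos", "in", "our", "product", "now"]
# "NEG" stands in for the special token replacing the AMR negation '-'
amr_seq = ["(", "asbestos", ":polarity", "NEG", ":time", "(", "now", ")", ")"]
print(token_pointers(amr_seq, sent, lemmas))
# [-1, 3, -1, -1, -1, -1, 7, -1, -1]
```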
{
"text": "Training For the optimization of the accuracy prediction model we use only the development and training sections of LDC2015E86 and the corresponding automatic parses together with the gold scores. Details on the training cycle can be found in the Supplemental Material \u00a7A (the loss is de-DynamicPower (Butler, 2016) , TMF (Bjerva et al., 2016) , UCL+Sheffield (Goodman et al., 2016) and CU-NLP (Foland and Martin, 2016) . 7 TMF-1 and TMF-2 (van Noord and Bos, 2017a), DAN-GNT (Nguyen and Nguyen, 2017) , Oxford (Buys and Blunsom, 2017), RIGOTRIO (Gruzitis et al., 2017) and JAMR (Flanigan et al., 2016) 8 https://spacy.io/ scribed in \u00a74). We use the same single (hierarchical) model for all three evaluation studies, proving its applicability across different scenarios (a nonhierarchical model is only instantiated for the ablation experiments in Section \u00a75.4).",
"cite_spans": [
{
"start": 301,
"end": 315,
"text": "(Butler, 2016)",
"ref_id": "BIBREF5"
},
{
"start": 322,
"end": 343,
"text": "(Bjerva et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 360,
"end": 382,
"text": "(Goodman et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 394,
"end": 419,
"text": "(Foland and Martin, 2016)",
"ref_id": "BIBREF16"
},
{
"start": 422,
"end": 423,
"text": "7",
"ref_id": null
},
{
"start": 476,
"end": 501,
"text": "(Nguyen and Nguyen, 2017)",
"ref_id": "BIBREF33"
},
{
"start": 546,
"end": 569,
"text": "(Gruzitis et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 579,
"end": 602,
"text": "(Flanigan et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The primary goal in our first experiment is to test whether the system is able to differentiate good from bad parses. This capacity is expressed by a high correlation of predicted accuracies with true accuracies on unseen data and by the ability to assign high scores to gold parses. We evaluate on the test partition of LDC2015E86 and BioAMRTest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with Gold Accuracy",
"sec_num": "5.1"
},
{
"text": "The results are displayed in Table 3 . Over all metrics, in-domain and out-ofdomain, we achieve significant correlations with the gold scores (p < 0.005 for every metric).",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Correlation results",
"sec_num": null
},
{
"text": "While on LDC2015E86 the model has learned to predict the KB linking F1 (\u03c1 = 0.86) and negation detection F1 with high correlation to the gold scores (\u03c1 = 0.87), Concept assessment poses the greatest challenge (\u03c1 = 0.64). For the out-ofdomain data BioAMRTest, these two facts seem almost reversed: here, the assessment of KB linking poses difficulties (\u03c1 = 0.23) while the Concept F1 predictions are better (\u03c1 = 0.62). The main metrics of interest (Smatch precision, recall and F1) can be predicted with high correlation on indomain data (\u03c1 \u2265 0.74, cf. also Figure 4 ) and solid correlation for out-of-domain data (\u03c1 \u2265 0.41).",
"cite_spans": [],
"ref_spans": [
{
"start": 557,
"end": 565,
"text": "Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Correlation results",
"sec_num": null
},
{
"text": "Find the Gold AMR! Now, we want to test our system's capacity to reliably predict high Smatch F1 scores for unseen gold AMR parses. Ideally, the scores should be close or equal to 1. For in-domain data, it appears to work well: a large amount of Smatch predictions for gold AMR graphs are very close to one (Figure 5a) . Evidently, our system also gets the ranking of the parsing systems right: the distribution of the state-of-the-art (GPLA) is shifted right towards higher predicted F1 scores, whereas the distribution of CAMR is shifted left towards lower scores. Also, more than 75% of gold parses have a predicted Smatch score of more than 0.99 (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 318,
"text": "(Figure 5a)",
"ref_id": "FIGREF2"
},
{
"start": 650,
"end": 659,
"text": "(Table 4)",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Correlation results",
"sec_num": null
},
{
"text": "On the other hand, finding gold parses in the BioAMRtest data is much harder: about 75% of Smatch scores get assigned a score of 0.83 or lower and only 1% of gold parses are predicted as perfect (Table 4 ). The estimated probability density function for gold parses (red solid line in Figure 5b) struggles to discriminate itself from the functions corresponding to the flawed parses of the automatic systems. Nevertheless, the prediction score density for gold parses is situated more on the right hand side than most others. In other words, we find that in the out-of-domain data gold parses tend to be assigned above-average scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 203,
"text": "(Table 4",
"ref_id": "TABREF8"
},
{
"start": 285,
"end": 295,
"text": "Figure 5b)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Correlation results",
"sec_num": null
},
{
"text": "To sum up, our observations for the out-ofdomain data stand in some contrast to what we observe for the in-domain data. However, this outcome can be plausibly explained: assuming that the out-of-domain gold parses have some unfamiliar properties, a system that has never seen such parses cannot judge well whether they are gold or not. In fact, it can be interpreted positively that the system hesitates to assign maximum scores to gold parses from a domain in which the model is completely inexperienced. Additionally, bio-medial texts involve difficult concepts, naming conventions and complicated noun phrases which are hard to understand even for non-expert humans (e.g., \"TAK733 led to a decrease in pERK and G1 arrest in most of these melanoma cell lines regardless of their origin, driver oncogenic mutations and in vitro sensitivity to TAK733\".). Taking all this into account, the results for out-of-domain data may be not as bad as they perhaps appear at first glance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation results",
"sec_num": null
},
{
"text": "Our automatic accuracy prediction method naturally lends itself for ranking parser outputs. For any sentence, provided automatic parses by competing systems can be ranked according to the scores predicted by our system. This scenario arises, e.g., when we run several AMR parsers over a large corpus with the aim of selecting the best parse for each sentence in order to collect silver training data. 9 In the worst case, we do not have any prior knowledge about a parser's performance (we may not even know the source of a parse). We use the test partition from LDC2015E86 and BioAMRTest to rank, for each sentence, the automatic candidate parses provided by the different parsers. In LDC2015E86 we assume not to be agnostic about the parsers as their performances on the development data of this data set are known (in terms of their sentence-average F1 Smatch score). Consider that we are given a sentence and three automatic parses. We select the maximum-score parse, where the score is defined by predicted Smatch F1 plus the average Smatch F1 of the parse-producing parser on the development data. As baselines in this scenario we (i) randomly choose a parse from the three options or (ii) always choose the parse of GPLA. On BioAMRTest, however, we have no prior information about the submitted systems. We select from 6 automatic parses for each sentence. Since now we are completely parser agnostic, the baseline is to randomly select a parse from the candidate set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application Study: AMR Parse Ranking",
"sec_num": "5.2"
},
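A minimal sketch of this selection rule (hypothetical names and toy scores; the parses are placeholders): add the parser's development-set average to the predicted score when that prior knowledge is available, and pick the maximum.

```python
def select_parse(candidates, dev_avg_f1=None):
    """Pick one parse per sentence. candidates: (parser, parse,
    predicted Smatch F1) triples. dev_avg_f1: optional dict mapping
    parser -> average dev Smatch F1 (the non-agnostic setting)."""
    def score(cand):
        name, _, pred_f1 = cand
        prior = dev_avg_f1.get(name, 0.0) if dev_avg_f1 else 0.0
        return pred_f1 + prior
    return max(candidates, key=score)

cands = [("GPLA", "(g / ...)", 0.78), ("JAMR", "(j / ...)", 0.71),
         ("CAMR", "(c / ...)", 0.69)]
print(select_parse(cands, {"GPLA": 0.74, "JAMR": 0.67, "CAMR": 0.63})[0])
# -> GPLA
```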
{
"text": "Results The results are displayed in the best parse according to our model's predicted accuracy score improves over all individual parser results: the obtained average Smatch F1 per sentence increases (i) slightly by 0.2 pp. compared to always choosing outputs from GPLA and (ii) observably by 5.7 pp. compared to randomly selecting a parse from the competing system outputs. The difference compared to always choosing GPLA seems negligible which perhaps can be explained by the fact that GPLA has been shown to be on par or better than doubly-blind human annotators. 10 The oracle that always selects the best parse (upper-bound in Table 5 ) shows little room for improvement: it achieves 2.1 pp. Smatch F1 increase compared to our model. This margin is small and further success might also be hampered by peculiarities in the manual annotations. On BioAMRTest, no prior information about the systems is available. Using our model's predicted scores to select from the alternative system outputs, we can boost Smatch F1 by 5.2 pp. compared to randomly selecting a parse. Compared to always selecting the parses of the best submitted system (in-hindsight), we lag behind by 3.9 pp. Since our data comprises outputs from several parsers with varying performance, we can study the performance of our approach in combination with different parsers (Figure 6 ). When only choosing among CAMR and JAMR outputs, on LDC2015E86, our system boosts the F1 by 2.7 pp. compared to randomly selecting a parse, and by 0.6 pp. compared to always choosing the parse from the better system (determined on dev, here: JAMR). Choosing from CAMR and GPLA or JAMR and GPLA makes little difference: in most cases our system selects the GPLA parse and the difference to only choosing GPLA parses is 10 GPLA (Lyu and Titov, 2018) achieves a high 74.4% corpus-level Smatch F1 (primarily news texts), while a prior annotation study (Banarescu et al., 2013) reported doubly blind annotation corpus-level F1 of 0.71 (for web texts).",
"cite_spans": [
{
"start": 568,
"end": 570,
"text": "10",
"ref_id": null
},
{
"start": 1775,
"end": 1777,
"text": "10",
"ref_id": null
},
{
"start": 1783,
"end": 1804,
"text": "(Lyu and Titov, 2018)",
"ref_id": "BIBREF29"
},
{
"start": 1905,
"end": 1929,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 633,
"end": 640,
"text": "Table 5",
"ref_id": "TABREF10"
},
{
"start": 1345,
"end": 1354,
"text": "(Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Application Study: AMR Parse Ranking",
"sec_num": "5.2"
},
{
"text": "CAMR/GPLA JAMR/GPLA Figure 6 : Using our model to predict the best parse out of two candidate parses, each from a different system.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "CAMR/JAMR",
"sec_num": null
},
{
"text": "marginal. Moreover, across both test sets, the majority of rankings assigned by our method have positive correlations with the true rankings (Table 6 ): 77% of all assigned rankings have a positive correlation with the true ranking (70% for biomedical). In sum, we can draw two conclusions from this experiment: given a sentence, ranking AMR parser outputs using our accuracy prediction model, on in-domain and out-of-domain unseen data (i) clearly improves performance when non state-of-the-art parsers are applied or if we are not informed about the parsers' performances and (ii) does not worsen results in other cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 150,
"text": "(Table 6",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "CAMR/JAMR",
"sec_num": null
},
{
"text": "In our final case study, we use our accuracy prediction model to predict a ranking over systems. We use our model to rank the unseen submitted system parses of the SemEval-2017 Task 9 (evaluated on BioAMRTest) and SemEval-2016 Task 8 (evaluated on LDC2015R36) according to average predicted F1 Smatch scores. Again, we assume a parser-agnostic setting, meaning we have no prior knowledge of the submitted systems (i.e. we just consider their outputs). In this setting, we do not rank individual parses given a sentence, but rank the system outputs, according to estimated average Smatch F1 per sentence. We evaluate against the final team rankings of the two shared tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application Study: Predict System Ranks",
"sec_num": "5.3"
},
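The corpus-level ranking step reduces to averaging predicted sentence scores per system and sorting; a sketch with hypothetical system names and toy scores (the paper's p1 corresponds to the significance of Spearman's rho):

```python
import numpy as np
from scipy.stats import spearmanr

def rank_systems(pred_f1_per_sentence):
    """Rank systems by their sentence-average predicted Smatch F1."""
    avg = {sys: float(np.mean(scores))
           for sys, scores in pred_f1_per_sentence.items()}
    return sorted(avg, key=avg.get, reverse=True)

preds = {"sysA": [0.62, 0.70, 0.55], "sysB": [0.48, 0.51, 0.50],
         "sysC": [0.58, 0.66, 0.61]}
predicted = rank_systems(preds)   # ['sysA', 'sysC', 'sysB']
true = ["sysA", "sysC", "sysB"]   # hypothetical gold ranking
# correlate predicted rank positions with the true ones
rho, p = spearmanr([true.index(s) for s in predicted],
                   range(len(predicted)))
print(predicted, rho)             # rho = 1.0 for this toy example
```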
{
"text": "The results are displayed in Table 7 . On BioAMRTest we have a good, albeit non statistically significant correlation with the true team ranking. On the in-domain LDC2015R36 test set we see a significant correlation of \u03c1 = 0.645 (p 1,2 < 0.05). In this shared task, many teams were competitive and differences between the best teams were marginal. For example, in the true ranking, places 1 to 6 achieved between 0.60 and 0.62 Smatch F1. Notably, the first four teams ac-Rank LDC2015R36 Rank BioAMRTest rank r rankr rank r rankr DANGNT --1 3 Oxford --2 1 TMF-2 --3 2 RIGOTRIO --4 5 TMF-1 --5 4 JAMR 7 7 6 6 RIGA 1 4 --Brandeis 2 3 --CU-NLP 3 1 --UCL+Sheffield 4 2 --ICL-HD 5 8 --M2L 6 10 --JAMR-base 8 12 --UofR 9 11 --TMF 10 5 --UMD 11 6 --DynamicPower 12 13 -det. baseline 13 9 -\u03c1 0.645 (p1 = 0.017, p2 = 0.011) 0.771 (p1 = 0.072, p2 = 0.051) Table 7 : True rank r (given corpus-Smatch) and predicted rankr (based on sentence average Smatch computed using our model). p 1 : probability of noncorrelation. p 2 : probability that a randomly produced ranking achieves equal or greater \u03c1 (estimated over 10 6 random rankings). For team names, see fn. 6 & 7.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 7",
"ref_id": null
},
{
"start": 529,
"end": 807,
"text": "DANGNT --1 3 Oxford --2 1 TMF-2 --3 2 RIGOTRIO --4 5 TMF-1 --5 4 JAMR 7 7 6 6 RIGA 1 4 --Brandeis 2 3 --CU-NLP 3 1 --UCL+Sheffield 4 2 --ICL-HD 5 8 --M2L 6 10 --JAMR-base 8 12 --UofR 9 11 --TMF 10 5 --UMD 11 6 --DynamicPower 12",
"ref_id": "TABREF5"
},
{
"start": 896,
"end": 903,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "cording to the true ranking and the first four teams according to our predicted ranking fall into the same group. This shows that our model successfully assigned high ranks to low error submissions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "We finally perform ablation experiments to evaluate the impact of individual model components. We experiment with five different setups. (i) instead of stacking two Bi-LSTMs, we use only one Bi-LSTM (one-lstm, Table 8 ). (ii) instead of the dependency tree, we feed the words in the order as they occur in the sentence (no-dep). (iii) nopointers: we remove the token-pointers from our model. (iv), instead of using the hierarchical setup, we predict all metrics on the same level (green in Figure 3 , no-HL in Table 8 ) and (v), no-HMTL: we optimize the non-hierarchical model only with respect to Smatch, disregarding the AMR subtasks. Remarkably, the dependency tree greatly helps the model on in-domain data over all measures (-37 total \u2206 without dependencies) but hurts the model on out-of-domain data (+27 total \u2206). A possible explanation is the degradation of the dependency parse quality: bio-medical data not only poses a challenge for our model, but also for the dependency parser. With special regard to the main AMR evaluation measure, Smatch F1, the learned pointer embeddings provide useful input on the indomain test data (-4 \u2206 without pointers). Smatch 78 -1 -1 -4 -3 -2 47 0 +5 +4 +2 -3 Concepts 64 -1 -4 -3 -4 -62 0 +3 +2 0 -Frames 72 0 -5 0 -1 -63 0 +1 +1 - ",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 217,
"text": "Table 8",
"ref_id": "TABREF14"
},
{
"start": 490,
"end": 498,
"text": "Figure 3",
"ref_id": null
},
{
"start": 510,
"end": 517,
"text": "Table 8",
"ref_id": "TABREF14"
},
{
"start": 1161,
"end": 1298,
"text": "Smatch 78 -1 -1 -4 -3 -2 47 0 +5 +4 +2 -3 Concepts 64 -1 -4 -3 -4 -62 0 +3 +2 0 -Frames 72 0 -5 0 -1 -63 0 +1 +1 -",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Ablation Experiments",
"sec_num": "5.4"
},
{
"text": "AMR parser evaluation with human gold annotation is very costly. Our main contributions in this work are two-fold: Firstly, we introduced the concept of automatic AMR accuracy prediction. Given only an automatic parse and the sentence, from whence it was derived, the goal is to predict evaluation metrics cheaply and possibly at runtime. Secondly, we framed the task as a multiple-output regression task and developed a hierarchical neural model to predict a rich suite of AMR evaluation metrics. We presented three case studies proving (i) the feasibility of automatic AMR accuracy prediction in general (significant correlation with gold scores on unseen indomain and out-of-domain data) and (ii) the applicability of our model in two use cases. In the first study, we ranked different automatic candidate parses per sentence, outperforming the random selection baseline by 5.7 pp. average Smatch F1 (in-domain) and 5.2 pp. (out-of-domain). In the second study, we ranked team submissions to two AMR shared tasks and our method was able to reproduce rankings similar to the true rankings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://gitlab.cl.uni-heidelberg.de/ opitz/quamr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Creating an AMR graph requires trained linguists and takes on average 8 to 13 minutes, cf.Banarescu et al. (2013)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The other subtasks are: Unlabelled (Smatch after edge label removal), No WSD (Smatch after PropBank sense removal), NS frames (PropBank frame identification without sense), Wikification (entity linking), NER (named entity recognition), Reentrancy (Smatch over re-entrant edges).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "c.f.Groschwitz et al. (2018);Chen and Palmer (2017). wise multiplication, subtraction and addition to both vector representations and concatenate the resulting vectors (\u2297 inFigure 3). The joint AMRdependency representation is further processed by a feed forward layer (FF) with sigmoid activation functions in order to predict, in total, 36 different metrics (green,Figure 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In a self-training scenario, we also could set a threshold of minimum predicted accuracy to select confident parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the German Research Foundation (grant no. GRK 1994/1) and the Leibniz Association (grant no. SAS-2015-IDS-LWC) and the Ministry of Science, Research, and Art of Baden-W\u00fcrttemberg. We are grateful to the NVIDIA corporation for donating the GPU used in this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "We initialize all parameters of the model randomly. Embedding vectors of dimension 128 are drawn from U (0.05, 0.05) and the LSTM weights (neurons: 128) and weights of the feed forward output layers are sampled from a Glorot uniform distribution (Glorot and Bengio, 2010) . For future work, initializing the embedding layer with pre-trained vectors could further increase the performance. In this work, however, we learn all parameters from the given data. We fit our model using Adam (Kingma and Ba, 2014) (learning rate: 0.001) on the training data over 20 epochs with mini batches of size 16. We apply early stopping according to the maximum Pearson's \u03c1 (with regard to Smatch F1) on the development data.2 quantifies the linear relationship between predicted scores (x 1 , ..., x n ) and true scores (y 1 , ..., y n ).",
"cite_spans": [
{
"start": 246,
"end": 271,
"text": "(Glorot and Bengio, 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material Hyper parameters and weights initialization",
"sec_num": null
}
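A direct NumPy transcription of Eq. (2), usable for the early-stopping criterion above; the function name and the toy scores are illustrative.

```python
import numpy as np

def pearson_rho(x, y):
    """Pearson's rho (Eq. 2): linear correlation between predicted
    scores x and true scores y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

print(pearson_rho([0.7, 0.3, 0.9], [0.6, 0.4, 0.8]))  # ~0.98
```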
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Abstract meaning representation for sembanking",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Riga at semeval-2016 task 8: Impact of smatch extensions and character-level neural translation on amr parsing accuracy",
"authors": [
{
"first": "Guntis",
"middle": [],
"last": "Barzdins",
"suffix": ""
},
{
"first": "Didzis",
"middle": [],
"last": "Gosko",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1143--1147",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Guntis Barzdins and Didzis Gosko. 2016. Riga at semeval-2016 task 8: Impact of smatch extensions and character-level neural translation on amr pars- ing accuracy. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 1143-1147. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Predicting the performance of parsing with referential translation machines",
"authors": [
{
"first": "Ergun",
"middle": [],
"last": "Biici",
"suffix": ""
}
],
"year": 2016,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "106",
"issue": "1",
"pages": "31--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ergun Biici. 2016. Predicting the performance of parsing with referential translation machines. The Prague Bulletin of Mathematical Linguistics, 106(1):31 -44.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The meaning factory at semeval-2016 task 8: Producing amrs with boxer",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Hessel",
"middle": [],
"last": "Haagsma",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1179--1184",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1182"
]
},
"num": null,
"urls": [],
"raw_text": "Johannes Bjerva, Johan Bos, and Hessel Haagsma. 2016. The meaning factory at semeval-2016 task 8: Producing amrs with boxer. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval-2016), pages 1179-1184. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Icl-hd at semeval-2016 task 8: Meaning representation parsing -augmenting amr parsing with a preposition semantic role labeling neural network",
"authors": [
{
"first": "Lauritz",
"middle": [],
"last": "Brandt",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grimm",
"suffix": ""
},
{
"first": "Mengfei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1160--1166",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Lauritz Brandt, David Grimm, Mengfei Zhou, and Yan- nick Versley. 2016. Icl-hd at semeval-2016 task 8: Meaning representation parsing -augmenting amr parsing with a preposition semantic role labeling neural network. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 1160-1166. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dynamicpower at semeval-2016 task 8: Processing syntactic parse trees with a dynamic semantics core",
"authors": [
{
"first": "Alastair",
"middle": [],
"last": "Butler",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1148--1153",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1177"
]
},
"num": null,
"urls": [],
"raw_text": "Alastair Butler. 2016. Dynamicpower at semeval- 2016 task 8: Processing syntactic parse trees with a dynamic semantics core. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 1148-1153. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Oxford at semeval-2017 task 9: Neural amr parsing with pointeraugmented attention",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "914--919",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2157"
]
},
"num": null,
"urls": [],
"raw_text": "Jan Buys and Phil Blunsom. 2017. Oxford at semeval- 2017 task 9: Neural amr parsing with pointer- augmented attention. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 914-919. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Smatch: an evaluation metric for semantic feature structures",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2013,
"venue": "The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "748--752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In ACL (2), pages 748-752. The Association for Computer Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining quality estimation and automatic post-editing to enhance machine translation output",
"authors": [
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2018,
"venue": "AMTA (1)",
"volume": "",
"issue": "",
"pages": "26--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajen Chatterjee, Matteo Negri, Marco Turchi, Fr\u00e9d\u00e9ric Blain, and Lucia Specia. 2018. Combin- ing quality estimation and automatic post-editing to enhance machine translation output. In AMTA (1), pages 26-38. Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unsupervised amr-dependency parse alignment",
"authors": [
{
"first": "Wei-Te",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "558--567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Te Chen and Martha Palmer. 2017. Unsupervised amr-dependency parse alignment. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Vol- ume 1, Long Papers, pages 558-567. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An incremental parser for abstract meaning representation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Damonte",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "536--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 536-546. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Detecting dependency parse errors with minimal resources",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dickinson",
"suffix": ""
},
{
"first": "Amber",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 12th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "241--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dickinson and Amber Smith. 2011. Detecting dependency parse errors with minimal resources. In Proceedings of the 12th International Conference on Parsing Technologies, pages 241-252. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Simulating dependencies to improve parse error detection",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dickinson",
"suffix": ""
},
{
"first": "Amber",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th International Workshop on Treebanks and Linguistic Theories (TLT15)",
"volume": "",
"issue": "",
"pages": "76--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dickinson and Amber Smith. 2017. Simu- lating dependencies to improve parse error detec- tion. In Proceedings of the 15th International Work- shop on Treebanks and Linguistic Theories (TLT15), Bloomington, IN, USA, January 20-21, 2017., vol- ume 1779 of CEUR Workshop Proceedings, pages 76-88. CEUR-WS.org.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Text summarization using abstract meaning representation",
"authors": [
{
"first": "Shibhansh",
"middle": [],
"last": "Dohare",
"suffix": ""
},
{
"first": "Harish",
"middle": [],
"last": "Karnick",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shibhansh Dohare and Harish Karnick. 2017. Text summarization using abstract meaning representa- tion. CoRR, abs/1706.01678.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "CMU at semeval-2016 task 8: Graph-based AMR parsing with infinite ramp loss",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2016,
"venue": "SemEval@NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1202--1206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime G. Carbonell. 2016. CMU at semeval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In SemEval@NAACL-HLT, pages 1202-1206. The Association for Computer Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A discriminative graph-based parser for the abstract meaning representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1426--1436",
"other_ids": {
"DOI": [
"10.3115/v1/P14-1134"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discrim- inative graph-based parser for the abstract mean- ing representation. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426- 1436. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Cu-nlp at semeval-2016 task 8: Amr parsing using lstmbased recurrent neural networks",
"authors": [
{
"first": "William",
"middle": [],
"last": "Foland",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1197--1201",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1185"
]
},
"num": null,
"urls": [],
"raw_text": "William Foland and James H. Martin. 2016. Cu-nlp at semeval-2016 task 8: Amr parsing using lstm- based recurrent neural networks. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval-2016), pages 1197-1201. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "AISTATS",
"volume": "9",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neural networks. In AISTATS, volume 9 of JMLR Proceed- ings, pages 249-256. JMLR.org.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ucl+sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an alphabound",
"authors": [
{
"first": "James",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1167--1172",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1180"
]
},
"num": null,
"urls": [],
"raw_text": "James Goodman, Andreas Vlachos, and Jason Narad- owsky. 2016. Ucl+sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an alpha- bound. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1167-1172. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "AMR dependency parsing with a typed semantic algebra",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Groschwitz",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Lindemann",
"suffix": ""
},
{
"first": "Meaghan",
"middle": [],
"last": "Fowlie",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2018. AMR dependency parsing with a typed semantic al- gebra. CoRR, abs/1805.11465.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rigotrio at semeval-2017 task 9: Combining machine learning and grammar engineering for amr parsing and generation",
"authors": [
{
"first": "Normunds",
"middle": [],
"last": "Gruzitis",
"suffix": ""
},
{
"first": "Didzis",
"middle": [],
"last": "Gosko",
"suffix": ""
},
{
"first": "Guntis",
"middle": [],
"last": "Barzdins",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "924--928",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2159"
]
},
"num": null,
"urls": [],
"raw_text": "Normunds Gruzitis, Didzis Gosko, and Guntis Barzdins. 2017. Rigotrio at semeval-2017 task 9: Combining machine learning and grammar engi- neering for amr parsing and generation. In Proceed- ings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 924-928. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Employing oracle confusion for parse quality estimation",
"authors": [
{
"first": "Sambhav",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Bhasha",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [],
"last": "Sangal",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "213--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sambhav Jain, Naman Jain, Bhasha Agrawal, and Ra- jeev Sangal. 2015. Employing oracle confusion for parse quality estimation. In Computational Linguis- tics and Intelligent Text Processing, pages 213-226, Cham. Springer International Publishing.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semantics-based machine translation with hyperedge replacement grammars",
"authors": [
{
"first": "Bevan",
"middle": [
"K."
],
"last": "Jones",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "1359--1376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bevan K. Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyper- edge replacement grammars. In COLING, pages 1359-1376. Indian Institute of Technology Bombay.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Quality estimation of english-hindi machine translation systems",
"authors": [
{
"first": "Nisheeth",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Iti",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Hemant",
"middle": [],
"last": "Darbari",
"suffix": ""
},
{
"first": "Ajai",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies, ICTCS '16",
"volume": "53",
"issue": "",
"pages": "1--53",
"other_ids": {
"DOI": [
"10.1145/2905055.2905259"
]
},
"num": null,
"urls": [],
"raw_text": "Nisheeth Joshi, Iti Mathur, Hemant Darbari, and Ajai Kumar. 2016. Quality estimation of english-hindi machine translation systems. In Proceedings of the Second International Conference on Informa- tion and Communication Technology for Competi- tive Strategies, ICTCS '16, pages 53:1-53:5, New York, NY, USA. ACM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Predictorestimator: Neural quality estimation based on target word prediction for machine translation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Hun-Young",
"middle": [],
"last": "Jung",
"suffix": ""
},
{
"first": "Hongseok",
"middle": [],
"last": "Kwon",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3109480"
]
},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Hun-Young Jung, Hongseok Kwon, Jong- Hyeok Lee, and Seung-Hoon Na. 2017. Predictor- estimator: Neural quality estimation based on target word prediction for machine translation.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Neural amr: Sequence-to-sequence models for parsing and generation",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "146--157",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and gen- eration. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146-157. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Toward abstractive summarization using semantic representations",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Norman",
"middle": [],
"last": "Sadeh",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1077--1086",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1114"
]
},
"num": null,
"urls": [],
"raw_text": "Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. 2015. Toward abstrac- tive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, pages 1077-1086. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Amr parsing as graph prediction with latent alignment",
"authors": [
{
"first": "Chunchuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "397--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunchuan Lyu and Ivan Titov. 2018. Amr parsing as graph prediction with latent alignment. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 397-407. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Semeval-2016 task 8: Meaning representation parsing",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1063--1073",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan May. 2016. Semeval-2016 task 8: Mean- ing representation parsing. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 1063-1073, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Abstract meaning representation parsing and generation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Priyadarshi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "9",
"issue": "",
"pages": "536--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan May and Jay Priyadarshi. 2017. Semeval- 2017 task 9: Abstract meaning representation parsing and generation. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017), page 536-545, Vancouver, Canada. Association for Computational Linguistics, Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning",
"authors": [
{
"first": "Arindam",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Chitta",
"middle": [],
"last": "Baral",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2779--2785",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arindam Mitra and Chitta Baral. 2016. Addressing a question answering challenge by combining statis- tical methods with inductive rule learning and rea- soning. In Proceedings of the Thirtieth AAAI Con- ference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA., pages 2779-2785. AAAI Press.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "UIT-DANGNT-CLNLP at semeval-2017 task 9: Building scientific concept fixing patterns for improving CAMR",
"authors": [
{
"first": "Khoa",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dang",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017",
"volume": "",
"issue": "",
"pages": "909--913",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2156"
]
},
"num": null,
"urls": [],
"raw_text": "Khoa Nguyen and Dang Nguyen. 2017. UIT- DANGNT-CLNLP at semeval-2017 task 9: Build- ing scientific concept fixing patterns for improving CAMR. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017, pages 909-913. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The meaning factory at semeval-2017 task 9: Producing amrs with neural semantic parsing",
"authors": [
{
"first": "Rik",
"middle": [],
"last": "Van Noord",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "929--933",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2160"
]
},
"num": null,
"urls": [],
"raw_text": "Rik van Noord and Johan Bos. 2017a. The mean- ing factory at semeval-2017 task 9: Producing amrs with neural semantic parsing. In Proceedings of the 11th International Workshop on Semantic Evalua- tion (SemEval-2017), pages 929-933. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural semantic parsing by character-based translation: Experiments with abstract meaning representations",
"authors": [
{
"first": "Rik",
"middle": [],
"last": "Van Noord",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics in the Netherlands Journal",
"volume": "7",
"issue": "",
"pages": "93--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rik van Noord and Johan Bos. 2017b. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. Computa- tional Linguistics in the Netherlands Journal, 7:93- 108.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The proposition bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Comput. Linguist",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {
"DOI": [
"10.1162/0891201053630264"
]
},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Comput. Linguist., 31(1):71-106.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Uofr at semeval-2016 task 8: Learning synchronous hyperedge replacement grammar for amr parsing",
"authors": [
{
"first": "Xiaochang",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1185--1189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaochang Peng and Daniel Gildea. 2016. Uofr at semeval-2016 task 8: Learning synchronous hyper- edge replacement grammar for amr parsing. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1185- 1189.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "M2L at semeval-2016 task 8: AMR parsing with neural networks",
"authors": [
{
"first": "Yevgeniy",
"middle": [],
"last": "Puzikov",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016",
"volume": "",
"issue": "",
"pages": "1154--1159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yevgeniy Puzikov, Daisuke Kawahara, and Sadao Kurohashi. 2016. M2L at semeval-2016 task 8: AMR parsing with neural networks. In Proceed- ings of the 10th International Workshop on Seman- tic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, pages 1154- 1159. The Association for Computer Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Clip$@$umd at semeval-2016 task 8: Parser for abstract meaning representation using learning to search",
"authors": [
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1190--1196",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1184"
]
},
"num": null,
"urls": [],
"raw_text": "Sudha Rao, Yogarshi Vyas, Hal Daum\u00e9 III, and Philip Resnik. 2016. Clip$@$umd at semeval-2016 task 8: Parser for abstract meaning representation us- ing learning to search. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 1190-1196. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Automatic prediction of parser accuracy",
"authors": [
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "887--896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujith Ravi, Kevin Knight, and Radu Soricut. 2008. Automatic prediction of parser accuracy. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 887-896. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Sprucing up the trees -error detection in treebanks",
"authors": [
{
"first": "Ines",
"middle": [],
"last": "Rehbein",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "107--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ines Rehbein and Josef Ruppenhofer. 2018. Spruc- ing up the trees -error detection in treebanks. In Proceedings of the 27th International Conference on Computational Linguistics, pages 107-118. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Multivariate density estimation and visualization",
"authors": [
{
"first": "David",
"middle": [
"W."
],
"last": "Scott",
"suffix": ""
}
],
"year": 2012,
"venue": "Handbook of computational statistics",
"volume": "",
"issue": "",
"pages": "549--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David W Scott. 2012. Multivariate density estimation and visualization. In Handbook of computational statistics, pages 549-569. Springer.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Combining quality prediction and system selection for improved automatic translation output",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Sushant",
"middle": [],
"last": "Narsale",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "163--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Soricut and Sushant Narsale. 2012. Combining quality prediction and system selection for improved automatic translation output. In Proceedings of the Seventh Workshop on Statistical Machine Transla- tion, pages 163-170. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Quest -a translation quality estimation framework",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Jose",
"middle": [
"G.",
"C."
],
"last": "De Souza",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51th Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Kashif Shah, Jose G. C. De Souza, Trevor Cohn, and Fondazione Bruno Kessler. 2013. Quest -a translation quality estimation framework. In In Proceedings of the 51th Conference of the Association for Computational Linguistics (ACL), Demo Session.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Using domain similarity for performance estimation",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Van Asch",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "31--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Van Asch and Walter Daelemans. 2010. Us- ing domain similarity for performance estimation. In Proceedings of the 2010 Workshop on Do- main Adaptation for Natural Language Processing, DANLP 2010, pages 31-36, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Camr at semeval-2016 task 8: An extended transition-based amr parser",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Xiaoman",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1173--1178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016a. Camr at semeval-2016 task 8: An extended transition-based amr parser. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1173- 1178, San Diego, California. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Camr at semeval-2016 task 8: An extended transition-based amr parser",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Xiaoman",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1173--1178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016b. Camr at semeval-2016 task 8: An extended transition-based amr parser. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1173- 1178.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Boosting transition-based amr parsing with refined actions and auxiliary analyzers",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "857--862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based amr parsing with refined actions and auxiliary analyzers. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers), pages 857-862, Beijing, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "A transition-based algorithm for amr parsing",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "366--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for amr pars- ing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 366-375, Denver, Colorado. Asso- ciation for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Predicted (y-axis) & gold (x-axis) Smatch F1."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Probability density function estimations for predicted F1 Smatch scores using Scott's method (Scott, 2012) with respect to candidate parses from different systems."
},
"TABREF5": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Statistics of data sets used in this work."
},
"TABREF7": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Pearson correlation coefficient (\u03c1) over various metrics and across domains. Explanations of the metrics and AMR subtasks are in Section \u00a73 and fn. 3"
},
"TABREF8": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Various percentiles of Smatch F1 predictions for gold graphs."
},
"TABREF10": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Results (sentence averages) of different AMR</td></tr><tr><td>parsing (bottom part) and ranking (top part) systems on</td></tr><tr><td>two test sets. Upper part: results when selecting from</td></tr><tr><td>alternative parses: lower-bound (upper-bound): oracle</td></tr><tr><td>selecting the worst (best) AMR parse; ours: results</td></tr><tr><td>when selecting the best parse according to our models'</td></tr><tr><td>accuracy prediction (hierarchical model).</td></tr></table>",
"text": ""
},
"TABREF11": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>. For</td></tr></table>",
"text": ""
},
"TABREF12": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Results of different parse-ranking systems</td></tr><tr><td>with respect to sentence-level parse rankings.\u03c1: av-</td></tr><tr><td>erage Pearson-r on a sentence level. %pos: ratio of</td></tr><tr><td>predicted rankings with positive \u03c1 to gold ranking.</td></tr></table>",
"text": ""
},
"TABREF14": {
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "\u03c1 correlation (F1) differences over different setups(columns), test sets (out-of-domain, in-domain) and subtasks (rows). \u00b1x: plus and minus x pp.\u03c1."
}
}
}
}