{
"paper_id": "S19-2012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:45:40.471390Z"
},
"title": "CUNY-PKU Parser at SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA",
"authors": [
{
"first": "Weimin",
"middle": [],
"last": "Lyu",
"suffix": "",
"affiliation": {
"laboratory": "Graduate Center and Hunter College",
"institution": "City University of New York",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sheng",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Abdul",
"middle": [],
"last": "Rafae Khan",
"suffix": "",
"affiliation": {
"laboratory": "Graduate Center and Hunter College",
"institution": "City University of New York",
"location": {}
},
"email": ""
},
{
"first": "Shengqiang",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "Graduate Center and Hunter College",
"institution": "City University of New York",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the systems of the CUNY-PKU team in \"SemEval 2019 Task 1: Cross-lingual Semantic Parsing with UCCA\" 1. We introduce a novel method by applying a cascaded MLP and BiLSTM model. Then, we ensemble multiple system-outputs by reparsing. Our system won the second places in German-20K-Closed track, and third place in English-20K-Closed track.",
"pdf_parse": {
"paper_id": "S19-2012",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the systems of the CUNY-PKU team in \"SemEval 2019 Task 1: Cross-lingual Semantic Parsing with UCCA\" 1. We introduce a novel method by applying a cascaded MLP and BiLSTM model. Then, we ensemble multiple system-outputs by reparsing. Our system won the second places in German-20K-Closed track, and third place in English-20K-Closed track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We participate in Cross-lingual Semantic Parsing at SemEval 2019, and our submission systems are based on TUPA (Hershcovich et al., 2017a (Hershcovich et al., , 2018 . A shared task summary paper (Hershcovich et al., 2019) by competition organizers summaries the results.",
"cite_spans": [
{
"start": 111,
"end": 137,
"text": "(Hershcovich et al., 2017a",
"ref_id": "BIBREF1"
},
{
"start": 138,
"end": 165,
"text": "(Hershcovich et al., , 2018",
"ref_id": "BIBREF3"
},
{
"start": 196,
"end": 222,
"text": "(Hershcovich et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We built three single parser using BiLSTM (Bidirectional LSTM) and Multi-Layer Perceptron (MLP) with TUPA (Hershcovich et al., 2017a (Hershcovich et al., , 2018 . Most importantly, we introduce a new training method Cascaded BiLSTM by first pretraining the BiLSTM model and then training another MLP model based on pre-trained BiLSTM model. The cascaded BiLSTM parser enhances the parsing accuracy on all tasks. We also complete a Self-Attentive Constituency Parser (Kitaev and Klein, 2018a,b) as comparison. Finally, we ensemble different parsers with a reparsing strategy (Sagae and Lavie, 2006) . In particular, we introduce an algorithm based on dynamic programming to perform inference for the UCCA representation. This decoder can also be utilized as a core engine for a single parser.",
"cite_spans": [
{
"start": 106,
"end": 132,
"text": "(Hershcovich et al., 2017a",
"ref_id": "BIBREF1"
},
{
"start": 133,
"end": 160,
"text": "(Hershcovich et al., , 2018",
"ref_id": "BIBREF3"
},
{
"start": 466,
"end": 493,
"text": "(Kitaev and Klein, 2018a,b)",
"ref_id": null
},
{
"start": 574,
"end": 597,
"text": "(Sagae and Lavie, 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will describe our systems in detail, including three single parsers in Section 2 and a voter in Section 3. We focus on two novel technical contributions: the Cascaded BiLSTM model and the Reparsing strategy. In Section 4 we will present experimental setup and results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The TUPA parser (Hershcovich et al., 2017a) builds on discontinuous constituency and dependency graph parsing and makes some improvements especially for the UCCA representation. The English parsing is based on Hershcovich et al. (2017a) , while French and German parsing is based on Hershcovich et al. (2018) .",
"cite_spans": [
{
"start": 16,
"end": 43,
"text": "(Hershcovich et al., 2017a)",
"ref_id": "BIBREF1"
},
{
"start": 210,
"end": 236,
"text": "Hershcovich et al. (2017a)",
"ref_id": "BIBREF1"
},
{
"start": 283,
"end": 308,
"text": "Hershcovich et al. (2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TUPA Parsers",
"sec_num": "2.1"
},
{
"text": "It has been shown that the choice of model plays an important role in transition-based parsing (Hershcovich et al., 2017b) . For TUPA, we built parsers with different models: MLP, BiL-STM, and also train MLP based on BiLSTM, viz. Cascaded BiLSTM. The three single parsers are described as the following:",
"cite_spans": [
{
"start": 95,
"end": 122,
"text": "(Hershcovich et al., 2017b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TUPA Parsers",
"sec_num": "2.1"
},
{
"text": "The MLP parser (Hershcovich et al., 2017b ) applies a feedforward neural network with dense embedding features to predict optimal transitions given particular parser states. This parser adopts a similar architecture to Chen and Manning (2014) .",
"cite_spans": [
{
"start": 15,
"end": 41,
"text": "(Hershcovich et al., 2017b",
"ref_id": "BIBREF2"
},
{
"start": 219,
"end": 242,
"text": "Chen and Manning (2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TUPA Parsers",
"sec_num": "2.1"
},
{
"text": "The BiLSTM parser (Hershcovich et al., 2018) applies a bidirectional LSTM to learn contextualized vector-based representations for words that are then utilized for encoding a parser state, similarly to Kiperwasser and Goldberg (2016) . The red box in Figure 1 shows the architecture of BiLSTM model, indicating that the representations after BiLSTM are fed into a Multiple-layer perceptron.",
"cite_spans": [
{
"start": 202,
"end": 233,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 251,
"end": 259,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "TUPA Parsers",
"sec_num": "2.1"
},
{
"text": "The Cascaded BiLSTM parser combines the above two parsing models, which contains a multistage training process. First, we use BiLSTM TUPA model to train 100 epochs, then retrain the model using MLP TUPA model for another 50 epochs. It's really interesting that the performances remains as good as BiLSTM TUPA model, even slightly better. Figure 1 shows the architecture of Cascaded BiLSTM model.",
"cite_spans": [],
"ref_spans": [
{
"start": 338,
"end": 346,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "TUPA Parsers",
"sec_num": "2.1"
},
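{
"text": "A minimal PyTorch-style sketch of the cascaded idea (not TUPA's actual code; the module names and sizes are illustrative assumptions): a BiLSTM encoder is first pre-trained with one transition-classifier head, and a fresh MLP head is then trained on top of the pre-trained encoder.\n\nimport torch.nn as nn\n\nclass BiLSTMEncoder(nn.Module):\n    def __init__(self, vocab_size=10000, dim=100, hidden=128):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, dim)\n        self.lstm = nn.LSTM(dim, hidden, bidirectional=True, batch_first=True)\n\n    def forward(self, tokens):\n        out, _ = self.lstm(self.embed(tokens))\n        return out  # contextualized word representations (2 * hidden dims)\n\n# Stage 1: pre-train the encoder with a transition classifier (100 epochs).\nencoder = BiLSTMEncoder()\nhead1 = nn.Linear(256, 40)  # 40 is a hypothetical number of transitions\n# ... standard training loop over parser states for 100 epochs ...\n\n# Stage 2: keep the pre-trained encoder; train a fresh MLP head (50 epochs).\nhead2 = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 40))\n# ... training loop for another 50 epochs, reusing the encoder ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TUPA Parsers",
"sec_num": "2.1"
},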
{
"text": "We also built a Constituency Parser as comparison, which uses a self-attentive architecture that makes explicit the manner considering information propagating between different locations in the sentences (Kitaev and Klein, 2018a,b) . The constituency parser uses parsing tree structures as input and output. Therefore, we convert the phrase structure tree format into UCCA XML formation and vice versa.",
"cite_spans": [
{
"start": 204,
"end": 231,
"text": "(Kitaev and Klein, 2018a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Constituency Parser",
"sec_num": "2.2"
},
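{
"text": "As a toy illustration of this conversion (the real UCCA XML schema is richer; the element and attribute names here are assumptions), a phrase-structure tree can be mapped to XML recursively:\n\nimport xml.etree.ElementTree as ET\n\ndef tree_to_xml(label, children):\n    # One XML element per tree node, labeled with its category.\n    node = ET.Element('node', attrib={'label': label})\n    for child in children:\n        if isinstance(child, str):  # leaf: a lexical unit\n            ET.SubElement(node, 'terminal').text = child\n        else:  # internal node: recurse on a (label, children) pair\n            node.append(tree_to_xml(*child))\n    return node\n\nroot = tree_to_xml('H', [('A', ['John']), ('P', ['arrived'])])\nprint(ET.tostring(root, encoding='unicode'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase Constituency Parser",
"sec_num": "2.2"
},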
{
"text": "The reparsing system (voter) takes multiple single parser (as in Section 2) results as input and produces a single, hopefully, improved UCCA graph as output. Briefly, each input UCCA graph is encoded to a chart of scores for standard CKY decoding. In this step, we utilize a number of auxiliary labels to encode remote edges and discontinuous constructions. These scores are summed up to get a new chart, which is used for CKY decoding for an immediate tree representation as the voting result. An immediate tree is then enhanced with reference relationships. Finally, a UCCA graph is built via interpreting auxiliary labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "Span representation Graph nodes in a UCCA graph naturally create a hierarchical structure through the use of primary edges. Following this tree structure, we give the definition of span of nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "Definition 1. The span of node x is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "1. empty if x is an implicit node; 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "[p, p+1) if x is a leaf node but not an implicit node, where p is the position of the lexical unit corresponding to x;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "3. the union of spans of x's children, otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
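{
"text": "A small self-contained sketch of Definition 1 on a toy dict-based node structure (the field names are assumptions, not the system's actual data structures):\n\ndef span(node):\n    if node.get('implicit'):  # case 1: implicit node, empty span\n        return set()\n    if 'position' in node:  # case 2: leaf over lexical unit p, i.e. [p, p+1)\n        return {node['position']}\n    positions = set()  # case 3: union of the children's spans\n    for child in node['children']:\n        positions |= span(child)\n    return positions\n\nx = {'children': [{'position': 0}, {'implicit': True}, {'children': [{'position': 2}]}]}\nprint(sorted(span(x)))  # [0, 2] -- a nonconsecutive span",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},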
{
"text": "Assuming that each span of nodes is consecutive (we will deal with nonconsecutive spans in Section 3). We encode the label of edge from x's parent to x as the label of span of x. If there are some implicit nodes in x's children, the labels of edges from x to them are also encoded by the label of the span of x. If the span of x is the same as x's parent, the label of this span will be encoded ordered. This process is well-defined due to the acyclic graph structure. Each parser is assigned a weight to indicate its contribution to reparsing. The spans with labels encoded from a UCCA graph are assigned the same score according to which parser they come from. Thus, there is a set of scored spans for each UCCA graph. Following the parsing literature, we call this set a chart. We merge multiple charts produced by different parsers to a single chart simply by adding the corresponding scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
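{
"text": "A minimal sketch of the chart-merging step, assuming each parser's chart is given as a set of (start, end, label) spans and each parser carries a fixed weight (the exact scoring scheme is a simplification):\n\nfrom collections import defaultdict\n\ndef merge_charts(charts, weights):\n    merged = defaultdict(float)\n    for chart, weight in zip(charts, weights):\n        for labeled_span in chart:  # every span a parser predicts gets its weight\n            merged[labeled_span] += weight\n    return merged  # the single chart handed to CKY decoding\n\nchart_a = {(0, 2, 'A'), (2, 3, 'P')}  # spans from parser 1 (weight 1.0)\nchart_b = {(0, 2, 'A'), (0, 3, 'H')}  # spans from parser 2 (weight 0.5)\nmerged = merge_charts([chart_a, chart_b], [1.0, 0.5])\nprint(merged[(0, 2, 'A')])  # 1.5: this span was predicted by both parsers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},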
{
"text": "Handling Remote Edges A remote edge with label L from node x to node y is equal to a primary edge with label L from x to an implicit node, which is referred to node y. Hence, if we can find the relationships of references, the remote edges are able to be recovered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "Since all primary edges from nodes to their parent are encoded in labels of spans, each node could be represented as part of the label of a span. We encode each reference of a remote edge as a pair of two nodes with a score. After building all primary edges through dynamic programming, we search for available references with the maximum score in each implicit node greedily and leverage these references to recover remote edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
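{
"text": "A sketch of this greedy reference recovery, assuming candidate references arrive as scored node pairs (the data layout is illustrative):\n\ndef recover_remote_edges(implicit_nodes, candidates):\n    # candidates maps an implicit node to the (referent, score) pairs\n    # proposed by the input parsers.\n    remote = {}\n    for node in implicit_nodes:\n        scored = candidates.get(node, [])\n        if scored:  # greedily keep the best-scored referent\n            remote[node] = max(scored, key=lambda pair: pair[1])[0]\n    return remote\n\nprint(recover_remote_edges(['imp1'], {'imp1': [('n3', 0.4), ('n7', 1.2)]}))\n# {'imp1': 'n7'}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},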
{
"text": "Handling Discontinuous Spans Discontinuous spans are removed by repeating the following steps: Step 1. Find a node x with a nonconsecutive span with the minimum starting point and minimum height, supposed its consecutive sub-span with minimum starting point is [a, b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "Step 2. Find a node y with a consecutive span with starting point b and maximum height, supposed the primary edge from y's parent to y is e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "Step 3. Create a node z with a special type MIR-ROR and create a primary edge with the label of e from y's parent to z. Remove the primary edge e and create a primary edge with a special label FAKE from x to y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
{
"text": "After each iteration, the span of y is added to x, and the sum of the length of nonconsecutive spans decreases. Each primary edge in an original UCCA graph can only be removed once. To that end, the running time of this algorithm is linear in the number of lexical units. If all references of MIRROR nodes are correctly predicted, the expected UCCA graph will be obtained. In this way, remote edges can be handled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},
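{
"text": "A self-contained sketch of the Step 3 rewiring on a toy graph where each child stores its (parent, label) primary edge; the span search of Steps 1-2 and all height bookkeeping are elided:\n\nMIRROR, FAKE = 'MIRROR', 'FAKE'\n\nclass Graph:\n    def __init__(self):\n        self.edges = {}  # child -> (parent, label) primary edge\n        self.kinds = {}  # node -> node type\n        self.count = 0\n\n    def new_node(self, kind='NORMAL'):\n        self.count += 1\n        name = 'n%d' % self.count\n        self.kinds[name] = kind\n        return name\n\ndef reattach(graph, x, y):\n    parent, label = graph.edges[y]  # the primary edge e into y (Step 2)\n    z = graph.new_node(MIRROR)  # Step 3: MIRROR node z carries e's label\n    graph.edges[z] = (parent, label)\n    graph.edges[y] = (x, FAKE)  # remove e; attach y under x via FAKE\n\ng = Graph()\na, x, y = g.new_node(), g.new_node(), g.new_node()\ng.edges[y] = (a, 'P')\nreattach(g, x, y)\nprint(g.edges[y])  # ('n2', 'FAKE'): y now hangs under x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reparsing System",
"sec_num": "3"
},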
{
"text": "The semantic parsing task is carried out in three languages: English, German and French, including three training data sets and parallel four test data sets. For English data, we use the Wikipedia UCCA corpus (henceforth Wiki) as training and development data, testing on English UCCA Wiki corpus as the in-domain test. Meanwhile, English UCCA 20K Leagues corpus serves as an out-ofdomain test set. For German data, we use 20K Leagues corpus for train, development, and test sets. For French data, they provide only limited training data, along with development and test data sets. Table 1 shows the sentences number of data sets for all three languages. For both closed track and open track, we only use official train data provided by organizer 2 . En-Wiki 4113 514 515 En-20K 0 0 492 Ge-20K 5211 651 632 Fr-20K 15 238 239 Table 1 : Sentence number in training, dev, and test sets for English, German and French UCCA data sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 582,
"end": 589,
"text": "Table 1",
"ref_id": null
},
{
"start": 751,
"end": 846,
"text": "En-Wiki 4113 514 515 En-20K 0 0 492 Ge-20K 5211 651 632 Fr-20K 15 238 239 Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": "4.1"
},
{
"text": "We build MLP and BiLSTM systems using TUPA (Hershcovich et al., 2017b) . For Cascaded BiLSTM model, we retrain the MLP model based on the pre-trained BiLSTM model, which forms a cascaded BiSLTM. For closed tracks, we train models based on the gold-standard UCCA annotation from official resources. We train all three models and ensemble the results based on the voting system. For open tracks, We use the same training data as closed tracks, but only train on BiLSTM model. Table 2 shows the results for four models in closed tracks. The italicized values are our official submission. However, we have made some improvement after the Evaluation Phrase, and the bold results are our best results. The first three models are single systems and the fourth model (Ensembled) ensembles different frameworks by reparsing systems. The baseline represents the baseline that competition provides for reference.",
"cite_spans": [
{
"start": 43,
"end": 70,
"text": "(Hershcovich et al., 2017b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 474,
"end": 481,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "TUPA Parsers",
"sec_num": "4.2"
},
{
"text": "By using feedforward Neural Network and embedding features, MLP models get the lowest scores. BiLSTM models achieve better results than MLP models in F1 scores, both in the indomain and out-of-domain data sets. However, the combination of BiSLTM and MLP models (Cascaded BiLSTM model) performs best among the three models in all results of single systems. For MLP model and BiLSTM model, we only train 100 epoches. For Cascaded BiLSTM model, we first train 100 epoches for BiLSTM model, then another 50 epoches for MLP model. Our in-house reparsing system ensembles the above parsers as described in Section 3. We can see that ensemble results are better at closed track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TUPA Parsers",
"sec_num": "4.2"
},
{
"text": "Our submission systems mainly contain a BiL-STM, an MLP, and a cascaded BiLSTM parser, as well as a voted system of above. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "5"
},
{
"text": "https://competitions.codalab.org/ competitions/19160",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740-750.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A transition-based directed acyclic graph parser for ucca",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "1127--1138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017a. A transition-based directed acyclic graph parser for ucca. In Proc. of ACL, pages 1127-1138.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A transition-based directed acyclic graph parser for ucca",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.00552"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017b. A transition-based directed acyclic graph parser for ucca. arXiv preprint arXiv:1704.00552.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multitask parsing across semantic representations",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.00287"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2018. Multitask parsing across semantic representa- tions. arXiv preprint arXiv:1805.00287.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semeval 2019 task 1: Cross-lingual semantic parsing with ucca",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Zohar",
"middle": [],
"last": "Aizenbud",
"suffix": ""
},
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Elior",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.02953"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Zohar Aizenbud, Leshem Choshen, Elior Sulem, Ari Rappoport, and Omri Abend. 2019. Semeval 2019 task 1: Cross-lingual semantic parsing with ucca. arXiv preprint arXiv:1903.02953.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Simple and accurate dependency parsing using bidirectional lstm feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional lstm feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Constituency parsing with a self-attentive encoder",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.01052"
]
},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev and Dan Klein. 2018a. Constituency parsing with a self-attentive encoder. arXiv preprint arXiv:1805.01052.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multilingual constituency parsing with self-attention and pretraining",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.11760"
]
},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev and Dan Klein. 2018b. Multilingual constituency parsing with self-attention and pre- training. arXiv preprint arXiv:1812.11760.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Parser combination by reparsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of the Human Lan- guage Technology Conference of the NAACL, Com- panion Volume: Short Papers.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Illustration of the multi-stage Cascaded BiL-STM model. Top: parser state. Bottom: BiLTSM with two MLP architectures. The red box represents BiL-STM(Hershcovich et al., 2018), and the blue box represents retraining a MLP model after implementing the BiLSTM architecture."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Remove nonconsecutive spans"
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "F1 scores for closed tracks in SemEval Task 1 2019 competition. The italic text represents our official submission in competition and the bold text represents our best F1 scores.Contributions and AcknowledgementsWeimin Lyu: built all TUPA Parsers, a selfattentive Parser, convert UCCA graph as constituency tree, verify the voting systems, and draft the paper. Sheng Huang and Shengqiang Zhang: built the reparsing system and UCCA-Dependency graph transformer. Abdul Rafae Khan: built cross-lingual parsers by generating synthetic data with machine translation. Weiwei Sun: extensively supervised PKU team, and Jia Xu: closely supervised CUNY team, in algorithms and experiments. We thank the initial work of Mark Perelman. This research was partially funded by National Science Foundation (NSF) Award No. 1747728 and National Science Foundation of China (NSFC) Award No. 61772036 and 61331011 and partially supported by the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology) and the Computer Science Department at CUNY Graduate Center as well as CUNY Hunter College.",
"html": null,
"content": "<table/>"
}
}
}
}