{
"paper_id": "S14-2006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:33:31.276976Z"
},
"title": "SemEval-2014 Task 6: Supervised Semantic Parsing of Robotic Spatial Commands",
"authors": [
{
"first": "Kais",
"middle": [],
"last": "Dukes",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Leeds",
"location": {
"postCode": "LS2 9JT",
"settlement": "Leeds",
"country": "United Kingdom"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "SemEval-2014 Task 6 aims to advance semantic parsing research by providing a high-quality annotated dataset to compare and evaluate approaches. The task focuses on contextual parsing of robotic commands, in which the additional context of spatial scenes can be used to guide a parser to control a robot arm. Six teams submitted systems using both rule-based and statistical methods. The best performing (hybrid) system scored 92.5% and 90.5% for parsing with and without spatial context. However, the best performing statistical system scored 87.35% and 60.84% respectively, indicating that generalized understanding of commands given to a robot remains challenging, despite the fixed domain used for the task. 'Move the pyramid on the blue cube on the gray one.' Figure 1: Example scene with a contextual spatial command from the Robot Commands Treebank.",
"pdf_parse": {
"paper_id": "S14-2006",
"_pdf_hash": "",
"abstract": [
{
"text": "SemEval-2014 Task 6 aims to advance semantic parsing research by providing a high-quality annotated dataset to compare and evaluate approaches. The task focuses on contextual parsing of robotic commands, in which the additional context of spatial scenes can be used to guide a parser to control a robot arm. Six teams submitted systems using both rule-based and statistical methods. The best performing (hybrid) system scored 92.5% and 90.5% for parsing with and without spatial context. However, the best performing statistical system scored 87.35% and 60.84% respectively, indicating that generalized understanding of commands given to a robot remains challenging, despite the fixed domain used for the task. 'Move the pyramid on the blue cube on the gray one.' Figure 1: Example scene with a contextual spatial command from the Robot Commands Treebank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic parsers analyze sentences to produce formal meaning representations that are used for the computational understanding of natural language. Recently, state-of-the-art semantic parsing methods have used for a variety of applications, including question answering (Kwiatkowski et al., 2013; Krishnamurthy and Mitchell, 2012) , dialog systems (Artzi and Zettlemoyer, 2011) , entity relation extraction (Kate and Mooney, 2010) and robotic control (Tellex, 2011; Kim and Mooney, 2012) .",
"cite_spans": [
{
"start": 270,
"end": 296,
"text": "(Kwiatkowski et al., 2013;",
"ref_id": "BIBREF20"
},
{
"start": 297,
"end": 330,
"text": "Krishnamurthy and Mitchell, 2012)",
"ref_id": "BIBREF18"
},
{
"start": 348,
"end": 377,
"text": "(Artzi and Zettlemoyer, 2011)",
"ref_id": "BIBREF0"
},
{
"start": 407,
"end": 430,
"text": "(Kate and Mooney, 2010)",
"ref_id": "BIBREF14"
},
{
"start": 466,
"end": 487,
"text": "Kim and Mooney, 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Different parsers can be distinguished by the level of supervision they require during training. Fully supervised training typically requires an annotated dataset that maps natural language (NL) to a formal meaning representation such as logical form. However, because annotated data is often not available, a recent trend in semantic parsing research has been to eschew supervised training in favour of either unsupervised or weakly-supervised methods that utilize additional information. For example, Berant and Liang (2014) use a dataset of 5,810 questionanswer pairs without annotated logical forms to induce a parser for a question-answering system. In comparison, Poon (2013) converts NL questions into formal queries via indirect supervision through database interaction.",
"cite_spans": [
{
"start": 670,
"end": 681,
"text": "Poon (2013)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast to previous work, the shared task described in this paper uses the Robot Commands Treebank (Dukes, 2013a) , a new dataset made available for supervised semantic parsing. The chosen domain is robotic control, in which NL commands are given to a robot arm used to manipulate shapes on an 8 x 8 game board. Despite the fixed domain, the task is challenging as correctly parsing commands requires understanding spatial context. For example, the command in Figure 1 may have several plausible interpretations, given different board configurations.",
"cite_spans": [
{
"start": 103,
"end": 117,
"text": "(Dukes, 2013a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 464,
"end": 472,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task is inspired by the classic AI system SHRLDU, which responded to NL commands to control a robot for a similar game board (Winograd, 1972 ), although that system is reported to not have generalized well (Dreyfus, 2009; Mitkov, 1999) . More recent research in command understanding has focused on parsing jointly with grounding, the process of mapping NL descriptions of entities within an environment to a semantic representation. Previous work includes Tellex et al. 2011, who develop a small corpus of commands for a simulated fork lift robot, with grounding performed using a factor graph. Similarly, Kim and Mooney (2012) perform joint parsing and grounding using a corpus of navigation commands. In contrast, this paper focuses on parsing using additional situational context for disambiguation and by using a larger NL dataset, in comparison to previous robotics research.",
"cite_spans": [
{
"start": 129,
"end": 144,
"text": "(Winograd, 1972",
"ref_id": "BIBREF32"
},
{
"start": 210,
"end": 225,
"text": "(Dreyfus, 2009;",
"ref_id": "BIBREF5"
},
{
"start": 226,
"end": 239,
"text": "Mitkov, 1999)",
"ref_id": "BIBREF23"
},
{
"start": 611,
"end": 632,
"text": "Kim and Mooney (2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remainder of this paper, we describe the task, the dataset and the metrics used for evaluation. We then compare the approaches used by participant systems and conclude with suggested improvements for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The long term research goal encouraged by the task is to develop a system that will robustly execute NL robotic commands. In general, this is a highly complex problem involving computational processing of language, spatial reasoning, contextual awareness and knowledge representation. To simplify the problem, participants were provided with additional tools and resources, allowing them to focus on developing a semantic parser for a fixed domain that would fit into an existing component architecture. Figure 2 shows how these components interact.",
"cite_spans": [],
"ref_spans": [
{
"start": 504,
"end": 512,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "Semantic parser: Systems submitted by participants are semantic parsers that accept an NL command as input, mapping this to a formal Robot Control Language (RCL), described further in section 3.3. The Robot Commands Treebank used for the both training and evaluation is an annotated corpus that pairs NL commands with contextual RCL statements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "A spatial planner is provided as an open Java API 1 . Commands in the treebank are specified in the context of spatial scenes. By interfacing with the planner, participant systems have access to this additional information. For example, given an RCL fragment for the expression 'the red cube on the blue block', the planner will ground the entity, returning a list of zero or more board coordinates corresponding to possible matches. The planner also validates commands to determine if they are compatible with spatial context. It can therefore be used to constrain the search space of possible parses, as well as enabling early resolution of attachment ambiguity during parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spatial planner:",
"sec_num": null
},
{
"text": "The simulated environment consists of an 8 x 8 board that can hold prisms and cubes which occur in eight different colors. The robot's gripper can move to any discrete position within an 8 x 8 x 8 space above the board. The planner uses the simulator to enforce physical laws within the game. For example, a block cannot remain unsupported in empty space due to gravity. Similarly, prisms cannot lie below other block types. In the integrated system, the parser uses the planner for context, then provides the final RCL statement to the simulator which executes the command by moving the robot arm to update the board.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robotic simulator:",
"sec_num": null
},
{
"text": "For the shared task, 3,409 sentences were selected from the treebank. This data size compares with related corpora used for semantic parsing such as the ATIS (Zettlemoyer and Collins, 2007) , GeoQuery (Kate et al., 2005) , Jobs (Tang and Mooney, 2001 ) and RoboCup (Kuhlmann et al., 2004) datasets, consisting of 4,978; 880; 640 and 300 sentences respectively.",
"cite_spans": [
{
"start": 158,
"end": 189,
"text": "(Zettlemoyer and Collins, 2007)",
"ref_id": "BIBREF33"
},
{
"start": 201,
"end": 220,
"text": "(Kate et al., 2005)",
"ref_id": "BIBREF15"
},
{
"start": 228,
"end": 250,
"text": "(Tang and Mooney, 2001",
"ref_id": "BIBREF29"
},
{
"start": 265,
"end": 288,
"text": "(Kuhlmann et al., 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
{
"text": "The treebank was developed via a game with a purpose (www.TrainRobots.com), in which players were shown before and after configurations and asked to give a corresponding command to a hypothetical robot arm. To make the game more competitive and to promote data quality, players rated each other's sentences and were rewarded with points for accurate entries (Dukes, 2013b) .",
"cite_spans": [
{
"start": 358,
"end": 372,
"text": "(Dukes, 2013b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
{
"text": "In total, over 10,000 commands were collected through the game. During an offline annotation phase, sentences were manually mapped to RCL. However, due to the nature of the game, players were free to enter arbitrarily complex sentences to describe moves, not all of which could be represented by RCL. In addition, some commands were syntactically well-formed, but not compatible with the corresponding scenes. The 3,409 commands selected for the task had RCL statements that were both understood by the planner (sequence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.2"
},
{
"text": "(event: (action: take) (entity: (id: 1) (color: cyan) (type: prism) (spatial-relation: (relation: above) (entity: (color: white) (type: cube))))) (event:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.2"
},
{
"text": "(action: drop) (entity: (type: reference) (reference-id: 1)) (destination:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.2"
},
{
"text": "(spatial-relation: (relation: above) (entity: (color: blue) (color: green) (type: stack)))))) and when given to the robotic simulator resulted in the expected move being made between before and after board configurations. Due to this extra validation step, all RCL statements provided for the task were contextually well-formed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.2"
},
{
"text": "RCL is a novel linguistically-oriented semantic representation. An RCL statement is a semantic tree ( Figure 3 ) where leaf nodes generally align to words in the corresponding sentence, and nonleaves are tagged using a pre-defined set of categories. RCL is designed to annotate rich linguistic structure, including ellipsis (such as 'place [it] on'), anaphoric references ('it' and 'one'), multiword spatial expressions ('on top of') and lexical disambiguation ('one' and 'place'). Due to ellipsis, unaligned words and multi-word expressions, a leaf node may align to zero, one or more words in a sentence. Figure 4 shows the RCL syntax for the tree in Figure 3 , as accepted by the spatial planner and the simulator. As these components do not require NL word alignment data, this additional information was made available to task participants for training via a separate Java API. The tagset used to annotate RCL nodes can be divided into general tags (that are arguably applicable to other domains) and specific tags that were customized for the domain in the task (Tables 1 and 2 overleaf, respectively). The general elements are typed entities (labelled with semantic features) that are connected using relations and events. This universal formalism is not domain-specific, and is inspired by semantic frames (Fillmore and Baker, 2001 ), a practical representation used for NL understanding systems (Dzikovska, 2004; UzZaman and Allen, 2010; Coyne et al., 2010; Dukes, 2009) .",
"cite_spans": [
{
"start": 1314,
"end": 1339,
"text": "(Fillmore and Baker, 2001",
"ref_id": "BIBREF12"
},
{
"start": 1404,
"end": 1421,
"text": "(Dzikovska, 2004;",
"ref_id": "BIBREF10"
},
{
"start": 1422,
"end": 1446,
"text": "UzZaman and Allen, 2010;",
"ref_id": "BIBREF31"
},
{
"start": 1447,
"end": 1466,
"text": "Coyne et al., 2010;",
"ref_id": "BIBREF4"
},
{
"start": 1467,
"end": 1479,
"text": "Dukes, 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 3",
"ref_id": "FIGREF0"
},
{
"start": 607,
"end": 615,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 653,
"end": 661,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Robot Control Language",
"sec_num": "3.3"
},
{
"text": "In the remainder of this section we summarize aspects of RCL that are relevant to the task; a more detailed description is provided by Dukes (2013a; . In an RCL statement such as Figure 4, a preterminal node together with its child leaf node correspond to a feature-value pair (such as the feature color and the constant blue). Two special features which are distinguished by the planner are id and reference-id, which are used for co-referencing such as for annotating anaphora and their antecedents. The remaining features model the simulated robotic domain. For",
"cite_spans": [
{
"start": 135,
"end": 148,
"text": "Dukes (2013a;",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 179,
"end": 185,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Robot Control Language",
"sec_num": "3.3"
},
{
"text": "Description sequence Used to specify a sequence of events or statements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RCL Element",
"sec_num": null
},
{
"text": "Used to specify a spatial relation between two entities or to describe a location.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "spatial-relation",
"sec_num": null
},
{
"text": "Used to specify an entity type. example, the values of the action feature are the moves used to control the robotic arm, while values of the type and relation features are the entity and relation types understood by the spatial planner (Table 2) . As well as qualitative relations (such as 'below' or 'above'), the planner also accepts spatial relations that include quantitative measurements, such as in 'two squares left of the red prism' (Figure 5 ). RCL distinguishes between relations which relate entities and indicators, which are attributes of entities (such as 'left' in 'the left cube'). For the task, participants are asked to map NL sentences to well-formed RCL by identifying spatial relations and indicators, then parsing higher-level entities and events. Finally, a well-formed RCL tree with an event (or sequence of events) at toplevel is given the simulator for execution.",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 245,
"text": "(Table 2)",
"ref_id": "TABREF2"
},
{
"start": 441,
"end": 450,
"text": "(Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "type",
"sec_num": null
},
{
"text": "Out of the 3,400 sentences annotated for the task, 2,500 sentences were provided to participants for system training. During evaluation, trained systems were presented with 909 previously unseen sentences and asked to generate corresponding RCL statements, with access to the spatial planner for additional context. To keep the evaluation process as simple as possible, each parser's output for a sentence was scored as correct if it exactly matched the expected RCL statement in the treebank. Participants were asked to calculate two metrics, P and NP, which are the proportion of exact matches with and without using the spatial planner respectively: Table 3 : System results for supervised semantic parsing of the Robot Commands Treebank (P = parsing with integrated spatial planning, NP = parsing without integrated spatial planning, NP -P = drop in performance without integrated spatial planning, N/A = performance not available).",
"cite_spans": [],
"ref_spans": [
{
"start": 653,
"end": 660,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4"
},
{
"text": "These metrics contrast with measures for partially correct parsed structures, such as Parseval (Black et al., 1991) or the leaf-ancestor metric (Sampson and Babarczy, 2003) . The rationale for using a strict match is that in the integrated system, a command will only be executed if it is completely understood, as both the spatial planner and the simulator require well-formed RCL.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Black et al., 1991)",
"ref_id": "BIBREF2"
},
{
"start": 144,
"end": 172,
"text": "(Sampson and Babarczy, 2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4"
},
{
"text": "Six teams participated in the shared task using a variety of strategies (Table 3) . The last measure in the table gives the performance drop without spatial context. The value NP -P = -2 for the best performing system suggests this as an upper bound for the task. The different values of this measure indicate the sensitivity to (or possibly reliance on) context to guide the parsing process. In the remainder of this section we compare the approaches and results of the six systems. UW-MRS: Packard (2014) achieved the best score for parsing both with and without spatial context, at 92.5% and 90.5%, respectively, using a hybrid system that combines a rule-based grammar with the Berkeley parser (Petrov et al., 2006) . The rule-based component uses the English Resource Grammar, a broad coverage handwritten HPSG grammar for English. The ERG produces a ranked list of Minimal Recursion Semantics (MRS) structures that encode predicate argument relations (Copestake et al., 2005) . Approximately 80 rules were then used to convert MRS to RCL. The highest ranked result that is validated by the spatial planner was selected as the output of the rule-based system. Using this approach, Packard reports scores of P = 82.4% and NP = 80.3% for parsing the evaluation data.",
"cite_spans": [
{
"start": 698,
"end": 719,
"text": "(Petrov et al., 2006)",
"ref_id": "BIBREF25"
},
{
"start": 957,
"end": 981,
"text": "(Copestake et al., 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 72,
"end": 81,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Systems and Results",
"sec_num": "5"
},
{
"text": "To further boost performance, the Berkeley parser was used for back-off. To train the parser, the RCL treebank was converted to phrase struc-ture by removing non-aligned nodes and inserting additional nodes to ensure one-to-one alignment with words in NL sentences. Performance of the Berkeley parser alone was NP = 81.5% (no P-measure was available as spatial planning was not integrated).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Results",
"sec_num": "5"
},
{
"text": "To combine components, the ERG was used initially, with fall back to the Berkeley parser when no contextually compatible RCL statement was produced. The hybrid approach improved accuracy considerably, with P = 92.5% and NP = 90.5%. Interestingly, Packard also performs precision and recall analysis, and reports that the rule-based component had higher precision, while the statistical component had higher recall, with the combined system outperforming each separate component in both precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Results",
"sec_num": "5"
},
{
"text": "The system by Stoyanchev et al. (2014) scored second best for contextual parsing and third best for parsing without using the spatial planner (P = 87.35% and NP = 60.84%). In contrast to Packard's UW-MRS submission, the AT&T system is a combination of three statistical models for tagging, parsing and reference resolution. During the tagging phase, a two-stage sequence tagger first assigns a part-of-speech tag to each word in a sentence, followed by an RCL feature-value pair such as (type: cube) or (color: blue), with unaligned words tagged as 'O'. For parsing, a constituency parser was trained using non-lexical RCL trees. Finally, anaphoric references were resolved using a maximum entropy feature model. When combined, the three components generate a list of weighted RCL trees, which are filtered by the spatial planner. Without integrated planning, the most-probable parse tree is selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AT&T Labs Research:",
"sec_num": null
},
{
"text": "In their evaluation, Stoyanchev et al. report accuracy scores for the separate phases as well as for the combined system. For the tagger, they report an accuracy score of 95.2%, using the standard split of 2,500 sentences for training and 909 for evaluation. To separately measure the joint accuracy of the parser together with reference resolution, gold-standard tags were used resulting in a performance of P = 94.83% and NP = 67.55%. However, using predicted tags, the system's final performance dropped to P = 87.35% and NP = 60.84%. To measure the effect of less supervision, the models were additionally trained on only 500 sentences. In this scenario, the tagging model degraded significantly, while the parsing and reference resolution models performed nearly as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AT&T Labs Research:",
"sec_num": null
},
{
"text": "RoBox: Using Combinatory Categorial Grammar (CCG) as a semantic parsing framework has been previously shown to be suitable for translating NL into logical form. Inspired by previous work using a CCG parser in combination with a structured perceptron (Zettlemoyer and Collins, 2007) , RoBox (Evang and Bos, 2014) was the best performing CCG system in the shared task scoring P = 86.8% and NP = 79.21%.",
"cite_spans": [
{
"start": 250,
"end": 281,
"text": "(Zettlemoyer and Collins, 2007)",
"ref_id": "BIBREF33"
},
{
"start": 290,
"end": 311,
"text": "(Evang and Bos, 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AT&T Labs Research:",
"sec_num": null
},
{
"text": "Using a similar approach to UW-MRS for its statistical component, RCL trees were interpreted as phrase-structure and converted to CCG derivations for training. During decoding, RCL statements were generated directly by the CCG parser. However, in contrast to the approach used by the AT&T system, RoBox interfaces with the planner during parsing instead of performing spatial validation a post-processing step. This enables early resolution of attachment ambiguity and helps constrain the search space. However, the planner is only used to validate entity elements, so that event and sequence elements were not validated. As a further difference to the AT&T system, anaphora resolution was not performed using a statistical model. Instead, multiple RCL trees were generated with different candidate anaphoric references, which were filtered out contextually using the spatial planner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AT&T Labs Research:",
"sec_num": null
},
{
"text": "RoBox suffered only a 7.59% absolute drop in performance without using spatial planning, second only to UW-MRS at 2%. Evang and Bos perform error analysis on RoBox and report that most errors relate to ellipsis, the ambiguous word one, anaphora or attachment ambiguity. They suggest that the system could be improved with better feature selection or by integrating the CCG parser more closely with the spatial planner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AT&T Labs Research:",
"sec_num": null
},
{
"text": "Shrdlite: The Shrdlite system by Ljungl\u00f6f (2014) , inspired by the Classic SHRDLU system by Winograd (1972) , is a purely rule-based sys-tem that was shown to be effective for the task. Scoring P = 86.1% and NP = 51.5%, Shrdlite ranked fourth for parsing with integrated planning, and fifth without using spatial context. However, it suffered the largest absolute drop in performance without planning (34.6 points), indicating that integration with the planner is essential for the system's reported accuracy.",
"cite_spans": [
{
"start": 33,
"end": 48,
"text": "Ljungl\u00f6f (2014)",
"ref_id": "BIBREF21"
},
{
"start": 92,
"end": 107,
"text": "Winograd (1972)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AT&T Labs Research:",
"sec_num": null
},
{
"text": "Shrdlite uses a hand-written compact unification grammar for the fragment of English appearing in the training data. The grammar is small, consisting of only 25 grammatical rules and 60 lexical rules implemented as a recursive-descent parser in Prolog. The lexicon consists of 150 words (and multi-word expressions) divided into 23 lexical categories, based on the RCL preterminal nodes found in the treebank. In a postprocessing phase, the resulting parse trees are normalized to ensure that they are well-formed by using a small set of supplementary rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AT&T Labs Research:",
"sec_num": null
},
{
"text": "However, the grammar is highly ambiguous resulting in multiple parses for a given input sentence. These are filtered by the spatial planner. If multiple parse trees were found to be compatible with spatial context (or when not using the planner), the tree with the smallest number of nodes was selected as the parser's final output. Additionally, because both the training and evaluation data were collected via crowdsourcing, sentences occasionally contain spelling errors, which were intentionally included in the task. To handle misspelt words, Shrdlite uses Levenshtein edit distance with a penalty to reparse sentences when the parser initially fails to produce any analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AT&T Labs Research:",
"sec_num": null
},
{
"text": "The CCG system by Mattelaer et al. (2014) uses a different approach to the RoBox system described previously. KUL-Eval scored P = 71.29% and NP = 57.76% in comparison to the RoBox scores of P = 86.8% and NP = 79.21%.",
"cite_spans": [
{
"start": 18,
"end": 41,
"text": "Mattelaer et al. (2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KUL-Eval:",
"sec_num": null
},
{
"text": "During training, the RCL treebank was converted to \u03bb-expressions. This process is fully reversible, so that no information in an RCL tree is lost during conversion. In contrast to RoBox, but in common with the AT&T parser, KUL-Eval performs spatial validation as a post-processing step and does not integrate the planner directly into the parsing process. A probabilistic CCG is used for parsing, so that multiple \u03bb-expressions are returned (each with an associated confidence measure) that are translated into RCL. Finally, in the validation step, the spatial planner is used to discard RCL statements that are incompatible with spatial context and the remaining mostprobable parse is returned as the system's output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KUL-Eval:",
"sec_num": null
},
{
"text": "Mattelaer et al. note that in several cases the parser produced partially correct statements but that these outputs did not contribute to the final score, given the strictly matching measures used for the P and NP metrics. However, well-formed RCL statements are required by the spatial planner and robotic simulator for the integrated system to robustly execute the specified NL command. Partially correct structures included statements which almost matched the expected RCL tree with the exception of incorrect featurevalues, or the addition or deletion of nodes. The most common errors were feature-values with incorrect entity types (such as 'edge' and 'region') and mismatched spatial relations (such as confusing 'above ' and 'within' and confusing 'right', 'left' and 'front') .",
"cite_spans": [
{
"start": 726,
"end": 783,
"text": "' and 'within' and confusing 'right', 'left' and 'front')",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KUL-Eval:",
"sec_num": null
},
{
"text": "UWM: The UWM system submitted by Kate (2014) uses an existing semantic parser, KRISP, for the shared task. KRISP (Kernel-based Robust Interpretation for Semantic Parsing) is a trainable semantic parser (Kate and Mooney, 2006 ) that uses Support Vector Machines (SVMs) as the machine learning method with a string subsequence kernel. As well as training data consisting of RCL paired with NL commands, KRISP required a context-free grammar for RCL, which was hand-written for UWM. During training, id nodes were removed from the RCL trees. These were recovered after parsing in a post-processing phase to resolve anaphora by matching to the nearest preceding antecedent.",
"cite_spans": [
{
"start": 33,
"end": 44,
"text": "Kate (2014)",
"ref_id": "BIBREF16"
},
{
"start": 202,
"end": 224,
"text": "(Kate and Mooney, 2006",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "KUL-Eval:",
"sec_num": null
},
{
"text": "In contrast to other systems submitted for the task, UWM does not interface with the spatial planner and parses purely non-contextually. Because the planner was not used, the system's accuracy was negatively impacted by simple issues that may have been easily resolved using spatial context. For example, in RCL, the verb 'place' can map to either drop or move actions, depending on whether or not a block is held in the gripper in the corresponding spatial scene. Without using spatial context, it is hard to distinguish between these cases during parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KUL-Eval:",
"sec_num": null
},
{
"text": "The system scored a non-contextual measure of NP = 45.98%, with Kate reporting a 51.18% best F-measure (at 72.67% precision and 39.49% recall). No P-measure was reported as the spatial planner was not used. Due to memory constraints when training the SVM classifiers, only 1,500 out of 2,500 possible sentences were used from the treebank to build the parsing model. However, it may be possible to increasing the size of training data in future work through sampling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KUL-Eval:",
"sec_num": null
},
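The reported best F-measure is consistent with the stated precision and recall, as a quick check with the standard F1 formula shows (the small difference is due to rounding of the reported figures):

```python
# F1 = 2PR / (P + R), computed from the precision and recall
# reported for the UWM system.

def f_measure(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

f1 = f_measure(0.7267, 0.3949)
print(round(f1 * 100, 2))  # 51.17, within rounding of the reported 51.18
```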
{
"text": "The six systems evaluated for the task employed a variety of semantic parsing strategies. With the exception of one submission, all systems interfaced with the spatial planner, either in a postprocessing phase, or directly during parsing to enable early disambiguation and to help constrain the search space. An open question that remains following the task is how applicable these methods would be to other domains. Systems that relied heavily on the planner to guide the parsing process could only be adapted to domains for a which a planner could conceivably exist. For example, nearly all robotic tasks such as such as navigation, object manipulation and task execution involve aspects of planning. NL question-answering interfaces to databases or knowledge stores are also good candidates for this approach, since parsing NL questions into a semantic representation within the context of a database schema or an ontology could be guided by a query planner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "However, approaches with a more attractive NP -P measure (such as UW-MRS and RoBox) are arguably more easily generalized to other domains, as they are less reliant on a planner. Additionally, the usual arguments for rule-based systems verses supervised statistical systems apply to any discussion on domain adaptation: rulebased systems require human manual effort, while supervised statistical systems required annotated data for the new domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In comparing the best two statistical systems (AT&T and RoBox) it is interesting to note that these performed similarly with integrated planning (P = 87.35% and 86.80%, respectively), but differed considerably without planning (NP = 60.84% and 79.21%). As these two systems employed different parsers (a constituency parser and a CCG parser), it is difficult to perform a direct comparison to understand why the AT&T system is more reliant on spatial context. It would also be interesting to understand, in further work, why the two CCG-based systems differed considerably in their P and NP scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "It is also surprising that the best performing system, UW-MRS, suffered only a 2% drop in performance without using the planner, demonstrating clearly that in the majority of sentences in the evaluation data, spatial context is not actually required to perform semantic parsing. Although as shown by the NP -P scores, spatial context can dramatically boost performance of certain approaches for the task when used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "This paper described a new task for SemEval: Supervised Semantic Parsing of Robotic Spatial Commands. Despite its novel nature, the task attracted high-quality submissions from six teams, using a variety of semantic parsing strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "It is hoped that this task will reappear at Se-mEval. Several lessons were learnt from this first version of the shared task which can be used to improve the task in future. One issue which several participants noted was the way in which the treebank was split into training and evaluation datasets. Out of the 3,409 sentences in the treebank, the first 2,500 sequential sentences were chosen for training. Because this data was not randomized, certain syntactic structures were only found during evaluation and were not present in the training data. Although this may have affected results, all participants evaluated their systems against the same datasets. Based on participant feedback, in addition to reporting P and NP-measures, it would also be illuminating to include a metric such as Parseval F1-scores to measure partial accuracy. An improved version of the task could also feature a better dataset by expanding the treebank, not only in terms of size but also in terms of linguistic structure. Many commands captured in the annotation game are not yet represented in RCL due to linguistic phenomena such as negation and conditional statements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Looking forward, a more promising approach to improving the spatial planner could be probabilistic planning, so that semantic parsers could interface with probabilistic facts with confidence measures. This approach is particularly suitable for robotics, where sensors often supply noisy signals about the robot's environment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
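A minimal sketch of this idea, with all names and values illustrative: a probabilistic planner would expose candidate groundings with confidence scores rather than hard facts, and the parser could select the most confident one:

```python
# Hedged sketch of probabilistic planning for grounding: each candidate
# grounding of a noisy percept carries a confidence score, and the
# parser picks the highest-scoring candidate.
# (All names and values here are invented for illustration.)

def best_grounding(candidates):
    """candidates: list of (grounding, confidence) pairs.
    Returns the grounding with the highest confidence."""
    return max(candidates, key=lambda c: c[1])[0]

# A sensor is 82% confident the block at (1, 2) is blue rather than cyan.
noisy = [("blue cube at (1, 2)", 0.82), ("cyan cube at (1, 2)", 0.18)]
print(best_grounding(noisy))  # blue cube at (1, 2)
```

A full system would combine these scores with the parser's own ambiguity scores rather than taking a hard maximum, but the interface shape is the point here.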
{
"text": "https://github.com/kaisdukes/train-robotsFigure 2: Integrated command understanding system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author would like to thank the numerous volunteer annotators who helped develop the dataset used for the task using crowdsourcing, by participating in the online game-with-a-purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bootstrapping Semantic Parsers from Conversations",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrap- ping Semantic Parsers from Conversations. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP (pp. 421-432).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic parsing via paraphrasing",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics, ACL",
"volume": "",
"issue": "",
"pages": "1415--1425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the Conference of the Association for Computational Linguistics, ACL (pp. 1415-1425).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Gdaniec",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "306--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ezra Black, Steven Abney, Dan Flickinger, Claudia Gdaniec, et al. 1991. A Procedure for Quantita- tively Comparing the Syntactic Coverage of Eng- lish Grammars. In Proceedings of the DARPA Speech and Natural Language Workshop (pp. 306- 311). San Mateo, California.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Minimal Recursion Semantics: An Introduction",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2005,
"venue": "Research on Language and Computation",
"volume": "3",
"issue": "2",
"pages": "281--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake, et al. 2005. Minimal Recursion Se- mantics: An Introduction. Research on Language and Computation, 3(2) (pp. 281-332).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Frame Semantics in Text-to-Scene Generation. Knowledge-Based and Intelligent Information and Engineering Systems",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Coyne",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "375--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob Coyne, Owen Rambow, et al. 2010. Frame Se- mantics in Text-to-Scene Generation. Knowledge- Based and Intelligent Information and Engineering Systems (pp. 375-384). Springer, Berlin.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Why Computers May Never Think Like People. Readings in the Philosophy of Technology",
"authors": [
{
"first": "Hubert",
"middle": [],
"last": "Dreyfus",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Dreyfus",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hubert Dreyfus and Stuart Dreyfus. 2009. Why Com- puters May Never Think Like People. Readings in the Philosophy of Technology.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "LOGICON: A System for Extracting Semantic Structure using Partial Parsing",
"authors": [
{
"first": "Kais",
"middle": [],
"last": "Dukes",
"suffix": ""
}
],
"year": 2009,
"venue": "ternational Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "18--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kais Dukes. 2009. LOGICON: A System for Extract- ing Semantic Structure using Partial Parsing. In In- ternational Conference on Recent Advances in Natural Language Processing, RANLP (pp. 18- 22). Borovets, Bulgaria.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semantic Annotation of Robotic Spatial Commands",
"authors": [
{
"first": "Kais",
"middle": [],
"last": "Dukes",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Language and Technology Conference, LTC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kais Dukes. 2013a. Semantic Annotation of Robotic Spatial Commands. In Proceedings of the Lan- guage and Technology Conference, LTC.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Train Robots: A Dataset for Natural Language Human-Robot Spatial Interaction through Verbal Commands",
"authors": [
{
"first": "Kais",
"middle": [],
"last": "Dukes",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Social Robotics. Embodied Communication of Goals and Intentions Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kais Dukes. 2013b. Train Robots: A Dataset for Natural Language Human-Robot Spatial Interac- tion through Verbal Commands. In International Conference on Social Robotics. Embodied Com- munication of Goals and Intentions Workshop.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Contextual Semantic Parsing using Crowdsourced Spatial Descriptions. Computation and Language",
"authors": [
{
"first": "Kais",
"middle": [],
"last": "Dukes",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1405.0145"
]
},
"num": null,
"urls": [],
"raw_text": "Kais Dukes. 2014. Contextual Semantic Parsing using Crowdsourced Spatial Descriptions. Computation and Language, arXiv:1405.0145 [cs.CL]",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Practical Semantic Representation For Natural Language Parsing",
"authors": [
{
"first": "Myroslava",
"middle": [],
"last": "Dzikovska",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myroslava Dzikovska 2004. A Practical Semantic Representation For Natural Language Parsing. PhD Thesis. University of Rochester.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "RoBox: CCG with Structured Perceptron for Supervised Semantic Parsing of Robotic Spatial Commands",
"authors": [
{
"first": "Kilian",
"middle": [],
"last": "Evang",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilian Evang and Johan Bos. 2014. RoBox: CCG with Structured Perceptron for Supervised Seman- tic Parsing of Robotic Spatial Commands. In Pro- ceedings of the International Workshop on Seman- tic Evaluation, SemEval.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Frame semantics for Text Understanding",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Fillmore",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Baker",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of WordNet and Other Lexical Resources Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Fillmore and Collin Baker. 2001. Frame se- mantics for Text Understanding. In Proceedings of WordNet and Other Lexical Resources Workshop.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using String Kernels for Learning Semantic Parsers",
"authors": [
{
"first": "Rohit",
"middle": [],
"last": "Kate",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the International Conference on Computational Linguistics and Annual Meeting of the Association for Computational Linguistics, COL-ING-ACL",
"volume": "",
"issue": "",
"pages": "913--920",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit Kate and Ray Mooney. 2006. Using String Kernels for Learning Semantic Parsers. In Pro- ceedings of the International Conference on Com- putational Linguistics and Annual Meeting of the Association for Computational Linguistics, COL- ING-ACL (pp. 913-920).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Joint Entity and Relation Extraction using Card-Pyramid Parsing",
"authors": [
{
"first": "Rohit",
"middle": [],
"last": "Kate",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Computational Natural Language Learning, CoNLL",
"volume": "",
"issue": "",
"pages": "203--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit Kate and Raymond Mooney. 2010. Joint Entity and Relation Extraction using Card-Pyramid Pars- ing. In Proceedings of the Conference on Compu- tational Natural Language Learning, CoNLL (pp. 203-212).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to Transform Natural to Formal Languages",
"authors": [
{
"first": "Rohit",
"middle": [],
"last": "Kate",
"suffix": ""
},
{
"first": "Yuk",
"middle": [
"Wah"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1062--1068",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit Kate, Yuk Wah Wong and Raymond Mooney. 2005. Learning to Transform Natural to Formal Languages. In Proceedings of the National Confer- ence on Artificial Intelligence (pp. 1062-1068).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "UWM: Applying an Existing Trainable Semantic Parser to Parse Robotic Spatial Commands",
"authors": [
{
"first": "Rohit",
"middle": [],
"last": "Kate",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit Kate. 2014. UWM: Applying an Existing Trainable Semantic Parser to Parse Robotic Spatial Commands. In Proceedings of the International Workshop on Semantic Evaluation, SemEval.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unsupervised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision",
"authors": [
{
"first": "Joohyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "433--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joohyun Kim and Raymond Mooney. 2012. Unsuper- vised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL (pp. 433-444).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Weakly Supervised Training of Semantic Parsers",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Tom Mitchell. 2012. Weakly Supervised Training of Semantic Parsers. In Proceedings of the Joint Conference on Empiri- cal Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Guiding a Reinforcement Learner with Natural Language Advice: Initial Results in RoboCup Soccer",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the AAAI Workshop on Supervisory Control of Learning and Adaptive Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Kuhlmann et al. 2004. Guiding a Reinforce- ment Learner with Natural Language Advice: Ini- tial Results in RoboCup Soccer. In Proceedings of the AAAI Workshop on Supervisory Control of Learning and Adaptive Systems.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scaling Semantic Parsers with On-the-fly Ontology Matching",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Eunsol Choi, Yoav Artzi and Luke Zettlemoyer. 2013. Scaling Semantic Parsers with On-the-fly Ontology Matching. In Proceed- ings of the Conference on Empirical Methods in Natural Language Processing, EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Shrdlite: Semantic Parsing using a Handmade Grammar",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Ljungl\u00f6f",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Ljungl\u00f6f. 2014. Shrdlite: Semantic Parsing us- ing a Handmade Grammar. In Proceedings of the International Workshop on Semantic Evaluation, SemEval.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "KUL-Eval: A Combinatory Categorial Grammar Approach for Improving Semantic Parsing of Robot Commands using Spatial Context",
"authors": [
{
"first": "Willem",
"middle": [],
"last": "Mattelaer",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "Verbeke",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Nitti",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Willem Mattelaer, Mathias Verbeke and Davide Nitti. 2014. KUL-Eval: A Combinatory Categorial Grammar Approach for Improving Semantic Pars- ing of Robot Commands using Spatial Context. In Proceedings of the International Workshop on Se- mantic Evaluation, SemEval.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Anaphora Resolution: The State of the Art",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Mitkov. 1999. Anaphora Resolution: The State of the Art. Technical Report. University of Wolverhampton.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "UW-MRS: Leveraging a Deep Grammar for Robotic Spatial Commands",
"authors": [
{
"first": "Woodley",
"middle": [],
"last": "Packard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Woodley Packard. 2014. UW-MRS: Leveraging a Deep Grammar for Robotic Spatial Commands. In Proceedings of the International Workshop on Se- mantic Evaluation, SemEval.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning Accurate, Compact, and Interpretable Tree Annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the International Conference on Computational Linguistics and the Annual Meeting of the Association for Computational Linguistics, COLING-ACL",
"volume": "",
"issue": "",
"pages": "433--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, et al. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of the International Conference on Computational Linguistics and the Annual Meeting of the Associa- tion for Computational Linguistics, COLING-ACL (pp. 433-440).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Grounded Unsupervised Semantic Parsing",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics, ACL",
"volume": "",
"issue": "",
"pages": "466--477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon. 2013. Grounded Unsupervised Seman- tic Parsing. In Proceedings of the Conference of the Association for Computational Linguistics, ACL (pp. 466-477).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A Test of the Leaf-Ancestor Metric for Parse Accuracy",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Sampson",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Babarczy",
"suffix": ""
}
],
"year": 2003,
"venue": "Natural Language Engineering",
"volume": "9",
"issue": "4",
"pages": "365--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Sampson and Anna Babarczy. 2003. A Test of the Leaf-Ancestor Metric for Parse Accuracy. Natural Language Engineering, 9.4 (pp. 365-380).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "AT&T Labs Research: Tag&Parse Approach to Semantic Parsing of Robot Spatial Commands",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Stoyanchev",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svetlana Stoyanchev, et al. 2014. AT&T Labs Re- search: Tag&Parse Approach to Semantic Parsing of Robot Spatial Commands. In Proceedings of the International Workshop on Semantic Evaluation, SemEval.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Using Multiple Clause Constructors in Inductive Logic Programming for Semantic Parsing",
"authors": [
{
"first": "Lappoon",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2001,
"venue": "Machine Learning, ECML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lappoon Tang and Raymond Mooney. 2001. Using Multiple Clause Constructors in Inductive Logic Programming for Semantic Parsing. Machine Learning, ECML.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Approaching the Symbol Grounding Problem with Probabilistic Graphical Models",
"authors": [
{
"first": "Stefanie",
"middle": [],
"last": "Tellax",
"suffix": ""
}
],
"year": 2011,
"venue": "AI Magazine",
"volume": "32",
"issue": "4",
"pages": "64--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefanie Tellax, et al. 2011. Approaching the Symbol Grounding Problem with Probabilistic Graphical Models. AI Magazine, 32:4 (pp. 64-76).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "TRIPS and TRIOS System for TempEval-2",
"authors": [
{
"first": "Naushad",
"middle": [],
"last": "Uzzaman",
"suffix": ""
},
{
"first": "James",
"middle": [
"Allen"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval",
"volume": "",
"issue": "",
"pages": "276--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naushad UzZaman and James Allen. 2010. TRIPS and TRIOS System for TempEval-2. In Proceed- ings of the International Workshop on Semantic Evaluation, SemEval (pp. 276-283).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Understanding Natural Language",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1972,
"venue": "Cognitive Psychology",
"volume": "3",
"issue": "1",
"pages": "1--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Winograd. 1972. Understanding Natural Lan- guage. Cognitive Psychology, 3:1 (pp. 1-191).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Online Learning of Relaxed CCG Grammars for Parsing to Logical Form",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "878--887",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Zettlemoyer and Michael Collins. 2007. Online Learning of Relaxed CCG Grammars for Parsing to Logical Form. In Proceedings of the Joint Confer- ence on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL (pp. 878-887).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Semantic tree from the treebank with an elliptical anaphoric node and its annotated antecedent.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "RCL representation with co-referencing.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": ".ure 5: A quantitative relation with a landmark.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>Category</td><td>Values</td></tr><tr><td>Actions</td><td>move, take, drop</td></tr><tr><td/><td>left, right, above, below,</td></tr><tr><td>Relations</td><td>forward, backward, adjacent, within, between, nearest, near,</td></tr><tr><td/><td>furthest, far, part</td></tr><tr><td/><td>left, leftmost, right, rightmost,</td></tr><tr><td>Indicators</td><td>top, highest, bottom, lowest, front, back, individual, furthest,</td></tr><tr><td/><td>nearest, center</td></tr><tr><td/><td>cube, prism, corner, board stack,</td></tr><tr><td>entity types</td><td>row, column, edge, tile, robot,</td></tr><tr><td/><td>region, reference, type-reference</td></tr><tr><td>Colors</td><td>blue, cyan, red, yellow, green, magenta, gray, white</td></tr></table>",
"type_str": "table",
"text": "Universal semantic elements in RCL.",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Semantic categories customized for the task.",
"num": null
}
}
}
}