|
{ |
|
"paper_id": "S17-1029", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:28:51.385935Z" |
|
}, |
|
"title": "Learning to Solve Geometry Problems from Natural Language Demonstrations in Textbooks", |
|
"authors": [ |
|
{ |
|
"first": "Mrinmaya", |
|
"middle": [], |
|
"last": "Sachan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Xing", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Humans as well as animals are good at imitation. Inspired by this, the learning by demonstration view of machine learning learns to perform a task from detailed example demonstrations. In this paper, we introduce the task of question answering using natural language demonstrations where the question answering system is provided with detailed demonstrative solutions to questions in natural language. As a case study, we explore the task of learning to solve geometry problems using demonstrative solutions available in textbooks. We collect a new dataset of demonstrative geometry solutions from textbooks and explore approaches that learn to interpret these demonstrations as well as to use these interpretations to solve geometry problems. Our approaches show improvements over the best previously published system for solving geometry problems.", |
|
"pdf_parse": { |
|
"paper_id": "S17-1029", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Humans as well as animals are good at imitation. Inspired by this, the learning by demonstration view of machine learning learns to perform a task from detailed example demonstrations. In this paper, we introduce the task of question answering using natural language demonstrations where the question answering system is provided with detailed demonstrative solutions to questions in natural language. As a case study, we explore the task of learning to solve geometry problems using demonstrative solutions available in textbooks. We collect a new dataset of demonstrative geometry solutions from textbooks and explore approaches that learn to interpret these demonstrations as well as to use these interpretations to solve geometry problems. Our approaches show improvements over the best previously published system for solving geometry problems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Cognitive science emphasizes the importance of imitation or learning by example (Meltzoff and Moore, 1977; Meltzoff, 1995) in human learning. When a teacher signals a pedagogical intention, children tend to imitate the teacher's actions (Buchsbaum et al., 2011; Butler and Markman, 2014) . Inspired by this phenomenon, the learning by demonstration view of machine learning (Schaal, 1997; Argall et al., 2009; Goldwasser and Roth, 2014) assumes training data in the form of example demonstrations. A task is demonstrated by a teacher and the learner generalizes from these demonstrations in order to execute the task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 106, |
|
"text": "(Meltzoff and Moore, 1977;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 122, |
|
"text": "Meltzoff, 1995)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 261, |
|
"text": "(Buchsbaum et al., 2011;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 287, |
|
"text": "Butler and Markman, 2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 388, |
|
"text": "(Schaal, 1997;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 409, |
|
"text": "Argall et al., 2009;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 436, |
|
"text": "Goldwasser and Roth, 2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "In this paper, we introduce the novel task of question answering using natural language demonstrations. Figure 1: Above: an example SAT style geometry problem with the text description, the corresponding diagram and (optionally) answer candidates. Below: a logical expression that represents the meaning of the text description and the diagram in the problem. GEOS derives a weighted logical expression in which each predicate also carries a confidence score, but we do not show the scores here for clarity.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "Research in question answering has traditionally focused on learning from question-answer pairs (Burger et al., 2001 ). However, it is well-established in the educational psychology literature (Allington and Cunningham, 2010; Felder et al., 2000) that children tend to learn better and faster from concrete illustrations and demonstrations. In this paper, we raise the question -\"Can we leverage demonstrative solutions for questions as provided by a teacher to improve our question answering systems?\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 116, |
|
"text": "(Burger et al., 2001", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 225, |
|
"text": "(Allington and Cunningham, 2010;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 246, |
|
"text": "Felder et al., 2000)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "As a case study, we propose the task of learning to solve SAT geometry problems (such as the one in Figure 1 ) using demonstrative solutions to these problems (such as the one in Figure 2 ). Such demonstrations are common in textbooks as they help students learn how to solve geometry problems effectively. We build a new dataset of demonstrative solutions of geometry problems and show that it can be used to improve GEOS (Seo et al., 2015) , the state-of-the-art in solving geometry problems. Figure 2 (content): 1. The sum of the interior angles of a triangle is 180\u00b0 => \u2220OAM + \u2220AMO + \u2220MOA = 180\u00b0 => \u2220MOA = 60\u00b0. 2. Similar triangle theorem => MOB ~ MOA => \u2220MOB = \u2220MOA = 60\u00b0. 3. \u2220AOB = \u2220MOB + \u2220MOA => \u2220AOB = 120\u00b0. 4. The angle subtended by a chord at the center is twice the angle subtended at the circumference => \u2220ADB = 0.5 \u00d7 \u2220AOB = 60\u00b0. Figure 2 : An example demonstration of how to solve the problem in Figure 1: (1) Use the theorem that the sum of the interior angles of a triangle is 180\u00b0 and additionally the fact that \u2220AMO is 90\u00b0 to conclude that \u2220MOA is 60\u00b0. (2) Conclude that MOA \u223c MOB (using a similar triangle theorem) and then conclude that \u2220MOB = \u2220MOA = 60\u00b0 (using the theorem that corresponding angles of similar triangles are equal).",
|
"cite_spans": [ |
|
{ |
|
"start": 423, |
|
"end": 441, |
|
"text": "(Seo et al., 2015)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 108, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 187, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 786, |
|
"end": 794, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 853, |
|
"end": 862, |
|
"text": "Figure 1:", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "(3) Use the angle sum rule to conclude that \u2220AOB = \u2220MOB + \u2220MOA = 120\u00b0. (4) Use the theorem that the angle subtended by an arc of a circle at the centre is double the angle subtended by it at any point on the circle to conclude that \u2220ADB = 0.5 \u00d7 \u2220AOB = 60\u00b0.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "We also present a technique inspired by recent work in situated question answering (Krishnamurthy et al., 2016) that jointly learns how to interpret the demonstration and how to use this interpretation to solve geometry problems. We model the interpretation task (the task of recognizing the various states in the demonstration) as a semantic parsing task. We model state transitions in the demonstration via a deduction model that treats each application of a theorem of geometry as a state transition. We describe techniques to learn the two models separately as well as jointly from various kinds of supervision: (a) when we only have a set of question-answer pairs as supervision, (b) when we have a set of questions and demonstrative solutions for them, and (c) when we have a set of question-answer pairs and a set of demonstrations.",
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 128, |
|
"text": "(Krishnamurthy et al., 2016)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "An important benefit of our approach is 'interpretability'. While GEOS is uninterpretable, our approach utilizes known theorems of geometry to deductively solve geometry problems. Our approach also generates demonstrative solutions (like Figure 2) as a by-product, which can be provided to students on educational platforms such as MOOCs to assist in their learning.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 247, |
|
"text": "Figure 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present an experimental evaluation of our approach on the two datasets previously introduced in Seo et al. (2015) and a new dataset collected by us from a number of math textbooks in India. Our experiments show that our approach of leveraging demonstrations improves GEOS. We also performed user studies with a number of school students studying geometry, who found that our approach is more interpretable as well as more useful in comparison to GEOS.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 116, |
|
"text": "Seo et al. (2015)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "GEOS solves geometry problems via a multi-stage approach. It first learns to parse the problem text and the diagram to a formal problem description compatible with both of them. The problem description is a first-order logic expression (see Figure 1) that includes known numbers or geometrical entities (e.g. 4 cm) as constants, unknown numbers or geometrical entities (e.g. O) as variables, geometric or arithmetic relations (e.g. isLine, isTriangle) as predicates, and properties of geometrical entities (e.g. measure, liesOn) as functions. The parser first learns a set of relations that potentially correspond to the problem text (or diagram) along with confidence scores. Then, a subset of relations that maximizes the joint text and diagram score is picked as the problem description.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 247, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Background: GEOS", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "For diagram parsing, GEOS uses a publicly available diagram parser for geometry problems (Seo et al., 2014 ) that provides confidence scores for each literal to be true in the diagram. We use the diagram parser from GEOS to handle diagram parsing in our work too.",
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 106, |
|
"text": "(Seo et al., 2014", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: GEOS", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Text parsing is performed in three stages. The parser first maps words or phrases in the text to their corresponding concepts. Then, it identifies relations between identified concepts. Finally, it performs relation completion which handles implications and coordinating conjunctions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: GEOS", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "Finally, GEOS uses a numerical approach to check the satisfiability of literals, and to answer the multiple-choice question. While this solver is grounded in coordinate geometry and indeed works well, it has some issues: GEOS requires an explicit mapping of each predicate to a set of constraints over point coordinates. For example, the predicate isPerpendicular(AB, CD) is mapped to the constraint $\\frac{y_B - y_A}{x_B - x_A} \\times \\frac{y_D - y_C}{x_D - x_C} = -1$. These constraints can be non-trivial to write and often require manual engineering. As a result, GEOS's constraint set is incomplete and it cannot solve a number of SAT style geometry problems. Furthermore, this solver is not interpretable. As our user studies show, it is not natural for a student to understand the solution of these geometry problems in terms of satisfiability of constraints over coordinates. A more natural way for students to understand and reason about these problems is through deductive reasoning using well-known axioms and theorems of geometry. This kind of deductive reasoning is used in explanations in textbooks. In contrast to GEOS, which uses supervised learning, our approach learns to solve geometry problems by interpreting natural language demonstrations of the solution. These demonstrations illustrate the process of solving the geometry problem via stepwise application of geometry theorems.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: GEOS", |
|
"sec_num": "2" |
|
}, |
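To make the constraint mapping concrete, here is a minimal sketch of how a predicate such as isPerpendicular(AB, CD) could be translated into a numerical check over point coordinates. This is not GEOS's actual code; the Point class, the function name and the tolerance are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def is_perpendicular(a: Point, b: Point, c: Point, d: Point, tol: float = 1e-6) -> bool:
    """Coordinate-geometry check for isPerpendicular(AB, CD): the product of the
    two slopes must equal -1, implemented here via the dot product of the two
    direction vectors to avoid division by zero for vertical lines."""
    ux, uy = b.x - a.x, b.y - a.y   # direction of AB
    vx, vy = d.x - c.x, d.y - c.y   # direction of CD
    return abs(ux * vx + uy * vy) < tol

# Example: AB is vertical, CD is horizontal, so they are perpendicular.
print(is_perpendicular(Point(0, 0), Point(0, 2), Point(1, 1), Point(3, 1)))  # True
```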
|
|
{

"text": "We represent theorems as Horn clause rules that map a premise in the logical language to a conclusion in the same language. Table 1 gives some examples of geometry theorems written as Horn clause rules. The free variables in the theorems are universally quantified. The variables are also typed; for example, ABC can be of type triangle or angle but not line. Let T be the set of theorems. Formally, each theorem t \u2208 T maps a logical formula $l_t^{(pr)}$ corresponding to the premise to a logical formula $l_t^{(co)}$ corresponding to the conclusion. The demonstration can be seen as a program: a sequence of Horn clause rule applications that lead to the solution of the geometry problem. Given a current state, theorem t can be applied to the state if there exists an assignment to the free variables in $l_t^{(pr)}$ that is true in the state. Each theorem application also has a probability associated with it; in our case, these probabilities are learned by a trained model. The state diagram for the demonstration in Figure 2 is shown in Figure 3 . Figure 3 caption: State diagram for the demonstration in Figure 2 . Theorems applied are marked in green and the state information is marked in red. Here $S_0$ corresponds to the state derived from question interpretation and each theorem application subsequently adds new predicates to the logical formula corresponding to $S_0$. The final state contains the answer: measure(ADB, 60\u00b0). This annotation of states and theorem applications is provided only for illustrative purposes; it is not required by our model. Now, we describe the various components of our learning-from-demonstrations approach: a semantic parser to interpret the demonstration and a deductive solver that learns to chain theorems.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 131, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1006, |
|
"end": 1014, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1027, |
|
"end": 1036, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1131, |
|
"end": 1139, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Theorems as Horn Clause Rules", |
|
"sec_num": "3" |
|
}, |
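As an illustrative sketch of the Horn-clause view (not the paper's implementation), the snippet below encodes a simplified similar-triangle theorem as a premise/conclusion pair and applies it to a state by searching for a variable binding that satisfies the premise. The fact encoding and predicate names are assumptions.

```python
# A state is a set of ground facts: (predicate, arg1, arg2, ...).
# Upper-case tokens in a theorem are free (universally quantified) variables.
similar_triangle_angles = {
    "premise": [("similar", "T1", "T2"), ("correspondingAngle", "A1", "T1"),
                ("correspondingAngle", "A2", "T2"), ("measure", "A1", "M")],
    "conclusion": [("measure", "A2", "M")],
}

def is_var(tok):
    return isinstance(tok, str) and tok.isupper()

def match(pattern, fact, binding):
    """Try to extend `binding` so that `pattern` matches the ground `fact`."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    new = dict(binding)
    for p, f in zip(pattern[1:], fact[1:]):
        if is_var(p):
            if p in new and new[p] != f:
                return None
            new[p] = f
        elif p != f:
            return None
    return new

def apply_theorem(theorem, state):
    """Return the new facts derivable by one application of `theorem` to `state`."""
    bindings = [{}]
    for pattern in theorem["premise"]:
        bindings = [b2 for b in bindings for fact in state
                    if (b2 := match(pattern, fact, b)) is not None]
    derived = set()
    for b in bindings:
        for concl in theorem["conclusion"]:
            derived.add((concl[0],) + tuple(b.get(t, t) for t in concl[1:]))
    return derived - state

state = {("similar", "tMOA", "tMOB"),
         ("correspondingAngle", "aMOA", "tMOA"),
         ("correspondingAngle", "aMOB", "tMOB"),
         ("measure", "aMOA", "60")}
print(apply_theorem(similar_triangle_angles, state))  # {('measure', 'aMOB', '60')}
```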
|
{

"text": "We first describe a semantic parser that maps a piece of text (in the geometry question or a demonstration) to a logical expression such as the one shown in Figure 1 . Our semantic parser uses a part-based log-linear model inspired by the multi-step approach taken in GEOS, which, in turn, is closely related to prior work in relation extraction and semantic role labeling. However, unlike GEOS, our parser combines the various steps in a joint model. Our parser first maps words or phrases in the input text x to corresponding concepts in the geometry language. Then, it identifies relations between the identified concepts. Finally, it performs relation completion to handle implications and coordinating conjunctions. We choose a log-linear model over the parses that decomposes into two parts. Let $p = \\{p_1, p_2\\}$, where $p_1$ denotes the concepts identified in p and $p_2$ denotes the identified relations. Relation completion is performed using a rule-based approach similar to that of GEOS. The log-linear model also factorizes into two components, for concept and relation identification:",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 165, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Interpretation via Semantic Parsing", |
|
"sec_num": "4.1" |
|
}, |
|
{

"text": "P(p \\mid x; \\theta_p) = \\frac{1}{Z(x; \\theta_p)} \\exp\\big(\\theta_p^{T} \\phi(p, x)\\big), \\qquad \\theta_p^{T} \\phi(p, x) = \\theta_{p_1}^{T} \\phi_1(p_1, x) + \\theta_{p_2}^{T} \\phi_2(p_2, x). \\quad Z(x; \\theta_p)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation via Semantic Parsing", |
|
"sec_num": "4.1" |
|
}, |
|
{

"text": "is the partition function of the log-linear model and $\\phi$ is the concatenation",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation via Semantic Parsing", |
|
"sec_num": "4.1" |
|
}, |
|
{

"text": "$[\\phi_1 \\; \\phi_2]$.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation via Semantic Parsing", |
|
"sec_num": "4.1" |
|
}, |
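A minimal sketch of the factorized log-linear scorer described above, using toy feature vectors; the real features $\phi_1$ and $\phi_2$ are the ones listed in Table 2, and the partition function here is approximated over the supplied candidates only (an assumption for illustration).

```python
import numpy as np

def log_score(theta1, theta2, phi1, phi2):
    """Unnormalized log-linear score theta_p^T phi(p, x), factorized into a
    concept-identification part (phi1) and a relation-identification part (phi2)."""
    return theta1 @ phi1 + theta2 @ phi2

def parse_probability(candidates, theta1, theta2):
    """Normalize the scores of a beam of candidate parses into P(p | x; theta_p)."""
    scores = np.array([log_score(theta1, theta2, p1, p2) for p1, p2 in candidates])
    scores -= scores.max()          # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()      # partition function Z over these candidates

# Toy example: two candidate parses with 3-dimensional feature vectors.
theta1, theta2 = np.array([0.5, -0.2, 1.0]), np.array([0.3, 0.0, -0.4])
candidates = [(np.array([1., 0., 1.]), np.array([0., 1., 0.])),
              (np.array([0., 1., 0.]), np.array([1., 0., 1.]))]
print(parse_probability(candidates, theta1, theta2))
```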
|
{

"text": "The complexity of searching for the highest-scoring latent parse is exponential. Hence, we use beam search with a fixed beam size (100) for inference; that is, in each step, we only expand the 100 most promising candidates so far as given by the current score. We first infer $p_1$ to identify a beam of concepts. Then, we infer $p_2$ to identify relations among the candidate concepts. We find the optimal parameters $\\theta_p$ using maximum-likelihood estimation with L2 regularization:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation via Semantic Parsing", |
|
"sec_num": "4.1" |
|
}, |
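The staged beam search can be sketched generically as below; the expand and score callables are placeholders (assumptions), and the paper uses a beam of 100 (step 1 expands concepts $p_1$, step 2 expands relations $p_2$).

```python
import heapq

def beam_search(initial, expand, score, beam_size=100, steps=2):
    """Generic beam search: keep only the `beam_size` highest-scoring partial
    parses after each expansion step, and return the best complete candidate."""
    beam = [initial]
    for _ in range(steps):
        candidates = [cand for partial in beam for cand in expand(partial)]
        beam = heapq.nlargest(beam_size, candidates, key=score)
    return max(beam, key=score) if beam else None

# Toy usage: build length-2 binary sequences, keep the highest-sum one.
best = beam_search((), lambda p: [p + (0,), p + (1,)], score=sum, beam_size=2)
print(best)  # (1, 1)
```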
|
{

"text": "\\theta_p^{*} = \\arg\\max_{\\theta_p} \\sum_{(x,p) \\in \\mathrm{Train}} \\log P(p \\mid x; \\theta_p) - \\lambda \\|\\theta_p\\|_2^2",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation via Semantic Parsing", |
|
"sec_num": "4.1" |
|
}, |
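For the parameter-estimation step, a hedged sketch of L2-regularized maximum likelihood optimized with L-BFGS; a toy binary logistic likelihood stands in for the paper's structured parse likelihood, and the data are made up.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, X, y, lam):
    """L2-regularized negative log-likelihood of a toy log-linear (logistic) model,
    standing in for sum_(x,p) log P(p | x; theta_p) - lambda ||theta_p||^2."""
    logits = X @ theta
    log_p = logits - np.logaddexp(0.0, logits)   # log sigma(logits)
    log_1m = -np.logaddexp(0.0, logits)          # log (1 - sigma(logits))
    ll = np.sum(y * log_p + (1 - y) * log_1m)
    return -(ll - lam * np.dot(theta, theta))

# Toy data: 4 examples, 3 features.
X = np.array([[1., 0., 1.], [1., 1., 0.], [0., 1., 1.], [1., 1., 1.]])
y = np.array([1., 0., 0., 1.])
result = minimize(neg_log_likelihood, x0=np.zeros(3), args=(X, y, 0.1), method="L-BFGS-B")
print(result.x)
```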
|
{ |
|
"text": "We use L-BFGS to optimize the objective. Finally, relation completion is performed using a deterministic rule-based approach as in GEOS which handles implicit concepts like the \"Equals\" relation in the sentence \"Circle O has a radius of 5\" and coordinating conjunctions like \"bisect\" between the two lines and two angles in \"AM and CM bisect BAC and BCA\". We refer the interested reader to section 4.3 in Seo et al. (2015) for details. This semantic parser is used to identify program states in demonstrations as well as to map geometry questions to logical expressions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 405, |
|
"end": 422, |
|
"text": "Seo et al. (2015)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretation via Semantic Parsing", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Given a demonstrative solution of a geometry problem in natural language such as the one shown in Figure 2 , we identify theorem applications by two simple heuristics. Often, theorem mentions in demonstrations collected from textbooks are labeled as references to theorems previously introduced in the textbook (for example, \"Theorem 3.1\"). In this case, we simply label the theorem application as the referenced theorem. Sometimes, the theorems are mentioned verbosely in the demonstration. To identify these mentions, we collect a set of theorem mentions from textbooks. Each theorem is also represented as a set of theorem mentions. Then, we use an off-the-shelf semantic text similarity system (\u0160ari\u0107 et al., 2012) and check if a contiguous sequence of sentences in the demonstration is a paraphrase of any of the gold theorem mentions. If the degree of similarity of a contiguous sequence of sentences in the demonstration with any of the gold theorem mentions is above a threshold, our system labels the sequence of sentences as the theorem. The text similarity system is tuned on the training dataset and the threshold is tuned on the development set. This heuristic works well and has a small error (< 10%) on our development set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 698, |
|
"end": 718, |
|
"text": "(\u0160ari\u0107 et al., 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 106, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "State and Axiom Identification", |
|
"sec_num": "4.1.1" |
|
}, |
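A sketch of the verbose-mention heuristic: compare contiguous sentence spans of a demonstration against gold theorem mentions and label spans whose similarity exceeds a threshold. The character-level similarity used here is only a stand-in for the semantic text similarity system of Šarić et al. (2012); the threshold, names and data are assumptions.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Stand-in textual similarity (the paper uses a semantic similarity system)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def label_theorem_mentions(sentences, gold_mentions, threshold=0.75):
    """Label every contiguous span of demonstration sentences whose similarity to
    some gold theorem mention is above the threshold."""
    labels = []
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences) + 1):
            span = " ".join(sentences[i:j])
            for theorem, mention in gold_mentions:
                if similarity(span, mention) >= threshold:
                    labels.append((i, j, theorem))
    return labels

sentences = ["The sum of interior angles of a triangle is 180 degrees.",
             "Therefore angle MOA is 60 degrees."]
gold = [("triangle_angle_sum", "sum of the interior angles of a triangle is 180 degrees")]
print(label_theorem_mentions(sentences, gold))
```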
|
{ |
|
"text": "For state identification, we use our semantic parser. The initial state corresponds to the logical expression corresponding to the question. Subsequent states are derived by parsing sentences in the demonstration. The identified state sequences are used to train our deductive solver.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State and Axiom Identification", |
|
"sec_num": "4.1.1" |
|
}, |
|
{

"text": "Our deductive solver, inspired by Krishnamurthy et al. (2016), uses the parsed state and axiom information (when provided) and learns to score the sequence of axiom applications that can lead to the solution of the problem. Our solver uses a log-linear model over the space of possible axiom applications. Given the set of theorems T and optionally a demonstration d, we assume $T = [t_1, t_2, \\ldots, t_k]$ to be a sequence of theorem applications. Each theorem application leads to a change in state. Let $s_0$ be the initial state determined by the logical formula derived from the question text and the diagram. Let $s = [s_1, s_2, \\ldots, s_k]$ be the sequence of program states after the corresponding theorem applications. The final state $s_k$ contains the answer to the question. We define the model score of the deduction as:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deductive Solver", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "P(s \\mid T, d; \\theta_{ex}) = \\frac{1}{Z(T, d; \\theta_{ex})} \\prod_{i=1}^{k} \\exp\\big(\\theta_{ex}^{T} \\psi(s_{i-1}, s_i, t_i, d)\\big)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deductive Solver", |
|
"sec_num": "4.2" |
|
}, |
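A sketch of the unnormalized deduction score, i.e. the per-step sum of $\theta_{ex}^{T} \psi(s_{i-1}, s_i, t_i, d)$ in log space; the feature function psi and the state encoding below are assumptions for illustration.

```python
import math

def deduction_log_score(states, theorems, theta, psi, demonstration=None):
    """Unnormalized log P(s | T, d): sum over steps of theta_ex^T psi(s_{i-1}, s_i, t_i, d).
    `states` is [s_0, ..., s_k]; `theorems` is the applied sequence [t_1, ..., t_k]."""
    assert len(states) == len(theorems) + 1
    total = 0.0
    for i, t in enumerate(theorems, start=1):
        features = psi(states[i - 1], states[i], t, demonstration)
        total += sum(w * f for w, f in zip(theta, features))
    return total

# Toy usage: two steps, two features per step (#new facts and a bias term).
psi = lambda prev, cur, t, d: [len(cur) - len(prev), 1.0]
states = [{"a"}, {"a", "b"}, {"a", "b", "c"}]
print(math.exp(deduction_log_score(states, ["t1", "t2"], [0.5, -0.1], psi)))
```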
|
{

"text": "Here, $\\theta_{ex}$ represents the model parameters and $\\psi$ represents the feature vector, which depends on the successive states $s_{i-1}$ and $s_i$, the demonstration d and the corresponding theorem application $t_i$. We find the optimal parameters $\\theta_{ex}$ using maximum-likelihood estimation with L2 regularization:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deductive Solver", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "\\theta_{ex}^{*} = \\arg\\max_{\\theta_{ex}} \\sum_{s \\in \\mathrm{Train}} \\log P(s \\mid T, d; \\theta_{ex}) - \\mu \\|\\theta_{ex}\\|_2^2",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deductive Solver", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We use beam search for inference and L-BFGS to optimize the objective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deductive Solver", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Finally, we describe a joint model for semantic parsing and problem solving that parses the geometry problem text, the demonstration when available, and learns a sequence of theorem applications that can solve the problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Semantic Parsing and Deduction", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "In this case, we use a joint log-linear model for semantic parsing and deduction. The model comprises two factors: one that scores semantic parses of the question and the demonstration (when provided) and another that scores the various possible theorem applications. The model predicts the answer a given the question q (and possibly the demonstration d) using two latent variables: p represents the latent semantic parse of the question and the demonstration, which involves identifying the logical formula for the question (and for every state in the demonstration when provided), and s represents the (possibly latent) program.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Semantic Parsing and Deduction", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "P(p, s \\mid q, a, d; \\theta) \\propto f_p(p \\mid \\{q, a, d\\}; \\theta_p) \\times f_s(s \\mid T, d; \\theta_s)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Semantic Parsing and Deduction", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "Here, $\\theta = \\{\\theta_p, \\theta_{ex}\\}$, and $f_p$ and $f_s$ represent the factors for semantic parsing and deduction.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Semantic Parsing and Deduction", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "f_p(p \\mid \\{q, a, d\\}; \\theta_p) \\propto \\exp\\big(\\theta_p^{T} \\phi(p, \\{q, a, d\\})\\big) \\quad \\text{and} \\quad f_s(s \\mid T, d; \\theta_s) \\propto \\prod_{i=1}^{k} \\exp\\big(\\theta_{ex}^{T} \\psi(s_{i-1}, s_i, t_i, d)\\big)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Semantic Parsing and Deduction", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "as defined in Sections 4.1 and 4.2. Next, we describe approaches to learn the joint model with various kinds of supervision.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Semantic Parsing and Deduction", |
|
"sec_num": "4.3" |
|
}, |
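Since the joint model factorizes into a parsing factor and a deduction factor, joint inference can be sketched as adding the two log scores over candidate (parse, program) pairs; the candidate enumeration and the scoring callables below are placeholders, not the paper's implementation.

```python
import math

def best_joint_candidate(parse_candidates, solve, parse_score, deduction_score):
    """Because P(p, s | ...) is proportional to f_p(p | ...) * f_s(s | ...), the log
    scores of the two factors add; pick the best (parse, program) pair in the beam.
    `solve` proposes candidate programs for a given parse."""
    best, best_score = None, -math.inf
    for parse in parse_candidates:
        for program in solve(parse):
            score = parse_score(parse) + deduction_score(program)
            if score > best_score:
                best, best_score = (parse, program), score
    return best, best_score

# Toy usage: two candidate parses, each with one trivial candidate program.
result = best_joint_candidate(["parse_a", "parse_b"],
                              solve=lambda p: [p + "_program"],
                              parse_score=lambda p: {"parse_a": -1.0, "parse_b": -0.5}[p],
                              deduction_score=lambda s: -0.25)
print(result)  # (('parse_b', 'parse_b_program'), -0.75)
```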
|
{

"text": "Our joint model for parsing and deduction can be learned using various kinds of supervision. We provide a learning algorithm for three settings: (a) when we only have geometry question-answer pairs as supervision, (b) when we have geometry questions and demonstrations for solving them, and (c) mixed supervision, when we have a set of geometry question-answer pairs in addition to some geometry questions and demonstrations. To do this, we implement two supervision schemes (Krishnamurthy et al., 2016). The first supervision scheme only verifies the answer and treats the other states in the supervision as latent. The second scheme verifies every state in the program. We combine both kinds of supervision when provided. Given supervision",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning from Types of Supervision", |
|
"sec_num": "4.4" |
|
}, |
|
{

"text": "$\\{q_i, a_i\\}_{i=1}^{n}$ and $\\{q_i, a_i, d_i\\}_{i=1}^{m}$",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning from Types of Supervision", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": ", we define the following L2 regularized objective:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning from Types of Supervision", |
|
"sec_num": "4.4" |
|
}, |
|
{

"text": "J(\\theta) = \\nu \\sum_{i=1}^{n} \\log \\sum_{p,s} P(p, s \\mid q_i, a_i; \\theta) \\times \\mathbb{1}_{\\mathrm{exec}(s) = a_i} + (1 - \\nu) \\sum_{i=1}^{m} \\log \\sum_{p,s} P(p, s \\mid q_i, a_i, d_i; \\theta) \\times \\mathbb{1}_{s(d_i) = s} - \\lambda \\|\\theta_p\\|_2^2 - \\mu \\|\\theta_{ex}\\|_2^2",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning from Types of Supervision", |
|
"sec_num": "4.4" |
|
}, |
|
{

"text": "For learning from answers, we set \u03bd = 1. For learning from demonstrations, we set \u03bd = 0. We tune the hyperparameters \u03bb, \u00b5 and \u03bd on a held-out dev set. We train all our models with L-BFGS, using beam search for inference. To avoid repeated usage of unnecessary theorems in the solution, we constrain the next theorem application to be distinct from previous theorem applications during beam search.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning from Types of Supervision", |
|
"sec_num": "4.4" |
|
}, |
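A hedged sketch of the mixed objective J(theta): candidates consistent with the gold answer (or with the demonstration states) contribute to a marginal likelihood, the two terms are weighted by nu, and the two L2 penalties are subtracted. The candidate lists and consistency flags are assumptions standing in for the indicator functions.

```python
import math

def mixed_objective(nu, qa_candidates, demo_candidates, lam, mu, theta_p, theta_ex):
    """Sketch of J(theta). Each example is a list of (probability, consistent) pairs,
    where `consistent` plays the role of 1[exec(s)=a_i] or 1[s(d_i)=s]."""
    def marginal_log(candidates):
        mass = sum(p for p, consistent in candidates if consistent)
        return math.log(mass) if mass > 0 else float("-inf")

    answer_term = sum(marginal_log(c) for c in qa_candidates)
    demo_term = sum(marginal_log(c) for c in demo_candidates)
    l2 = lam * sum(w * w for w in theta_p) + mu * sum(w * w for w in theta_ex)
    return nu * answer_term + (1 - nu) * demo_term - l2

# Toy usage: one question-answer example and one demonstration example.
qa = [[(0.6, True), (0.4, False)]]
demo = [[(0.3, True), (0.7, True)]]
print(mixed_objective(0.5, qa, demo, lam=0.1, mu=0.1, theta_p=[0.2], theta_ex=[-0.3]))
```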
|
{

"text": "Next, we define our feature set: $\\phi_1$ and $\\phi_2$ for learning the semantic parser and $\\psi$ for learning the deduction model. The semantic parser features $\\phi_1$ and $\\phi_2$ are inspired by GEOS. The deduction model features $\\psi$ score consecutive states in the deduction, $s_{i-1}$ and $s_i$, and the theorem $t_i$ which, when applied to $s_{i-1}$, leads to $s_i$. $\\psi$ comprises features that score whether theorem $t_i$ is applicable to state $s_{i-1}$ and whether the application of $t_i$ to state $s_{i-1}$ leads to state $s_i$. Table 2 lists the feature set.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 498, |
|
"end": 505, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.5" |
|
}, |
|
{

"text": "We collect a new dataset of demonstrations for solving geometry problems from a set of grade 6-10 Indian high school math textbooks by four publishers/authors -NCERT 1 , R S Aggarwal 2 , R D Sharma 3 and M L Aggarwal 4 -a total of 5 \u00d7 4 = 20 textbooks, as well as a set of online geometry problems and solutions from three popular educational portals for grade 6-10 students in India: Tiwari Academy 5 , School Lamp 6 and Oswaal Books 7 . Millions of students in India study geometry from these books and portals every year and these materials are available online. Table 2, \u03c6_1 features. Lexicon Map: indicator that the word or phrase maps to a predicate in a lexicon created in GEOS. GEOS derives correspondences between words/phrases and geometry keywords and concepts in the geometry language using manual annotations in its training data. For instance, the lexicon contains (\"square\", square, IsSquare), including all possible concepts for the phrase \"square\". Regex for numbers and explicit variables: indicator that the word or phrase satisfies a regular expression to detect numbers or explicit variables (e.g. \"5\", \"AB\", \"O\"). These regular expressions were built as a part of GEOS. \u03c6_2 features. Dependency tree distance: shortest distance between the words of the concept nodes in the dependency tree. We use indicator features for distances of -3 to 3. A positive distance indicates that the child word is to the right of the parent word in the sentence, and a negative distance otherwise.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Demonstrations Dataset", |
|
"sec_num": "5" |
|
}, |
|
{

"text": "Distance between the words of the concept nodes in the sentence. Dependency edge: indicator functions for the outgoing edges of the parent and the child on the shortest path between them.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word distance", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Indicator functions for the POS tags of the parent and the child", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part of speech tag", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Indicator functions for unary / binary parent and child nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relation type", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Indicator functions for the return types of the parent and the child nodes. For example, return type of Equals is boolean, and that of LengthOf is numeric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Return type", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "State and theorem premise predicates", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u03c8 \u03c8 \u03c8", |
|
"sec_num": null |
|
}, |
|
{

"text": "Treat the state $s_{i-1}$ and the theorem premise $l_{t_i}^{(pr)}$ as multi-sets of predicates. The feature is given by $div(s_{i-1} \\| l_{t_i}^{(pr)})$, the divergence between the two multi-sets. The divergence between multi-sets A and B is $div(A, B) = \\sum_k \\min(A_k, B_k) / B_k$, which measures the degree to which the elements in A satisfy the pre-condition in B.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u03c8 \u03c8 \u03c8", |
|
"sec_num": null |
|
}, |
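A small sketch of the multi-set divergence div(A, B) = sum_k min(A_k, B_k) / B_k used by these features; the predicate multi-sets in the example are made up.

```python
from collections import Counter

def divergence(a, b):
    """div(A, B) = sum over elements k of B of min(A_k, B_k) / B_k: the degree to
    which the state multi-set A satisfies the pre-condition multi-set B."""
    a, b = Counter(a), Counter(b)
    return sum(min(a[k], count) / count for k, count in b.items())

# Toy usage: the state covers 2 of the 3 distinct premise predicates.
state = ["isTriangle", "measure", "measure", "isLine"]
premise = ["isTriangle", "measure", "isAngle"]
print(divergence(state, premise))  # 2.0
```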
|
|
{

"text": "State and theorem premise predicate-arguments: Now treat the state $s_{i-1}$ and the theorem premise $l_{t_i}^{(pr)}$ as two multi-sets over predicate-arguments. The feature is again given by $div(s_{i-1} \\| l_{t_i}^{(pr)})$, the divergence between the two multi-sets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u03c8 \u03c8 \u03c8", |
|
"sec_num": null |
|
}, |
|
{

"text": "Treat the state $s_i$ and the theorem conclusion $l_{t_i}^{(co)}$ as two multi-sets of predicates. The feature is given by $div(s_i \\| l_{t_i}^{(co)})$, the divergence between the two multi-sets. State and theorem conclusion predicate-arguments: Now treat the state $s_i$ and the theorem conclusion $l_{t_i}^{(co)}$ as two multi-sets over predicate-arguments. The feature is given by $div(s_i \\| l_{t_i}^{(co)})$, the divergence between the two multi-sets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State and theorem conclusion predicates", |
|
"sec_num": null |
|
}, |
|
{

"text": "Treat the state $s_i$ and the theorem conclusion $l_{t_i}^{(co)}$ as two distributions over predicates. The feature is the total variation distance between the two distributions.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State and theorem conclusion predicates", |
|
"sec_num": null |
|
}, |
|
{

"text": "State and theorem conclusion predicate-arguments: Now treat the state $s_i$ and the theorem conclusion $l_{t_i}^{(co)}$ as two distributions over predicate-arguments. The feature is the total variation distance between the two distributions.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State and theorem conclusion predicates", |
|
"sec_num": null |
|
}, |
|
{

"text": "We additionally use three product features: $\\psi_1 \\psi_3 \\psi_5$, $\\psi_2 \\psi_4 \\psi_6$ and $\\psi_1 \\psi_2 \\psi_3 \\psi_4 \\psi_5 \\psi_6$. Table 2 : The feature set for our joint semantic-parsing and deduction model. Features $\\phi_1$ and $\\phi_2$ are motivated by GEOS. We manually marked the chapters relevant for geometry in these books and then parsed them using Adobe Acrobat's pdf2xml parser. Then, we manually extracted example problems, leading to a total of 2235 geometry problems with demonstrations. We also annotated 1000 demonstrations by labeling the various states and theorem applications. We manually collected a set of theorems of geometry by going through the textbooks, and wrote them as Horn clause rules. A total of 293 unique theorems were collected. Then, we marked contiguous sentences in the demonstration texts as one of these 293 theorems or as states. An example annotation for the running example in Figures 1 and 2 is provided in Figure 3 . Note that the annotation of states and theorem applications is not used in training our models and is only used for testing the accuracy of the programs induced by our model.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 153, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 919, |
|
"end": 934, |
|
"text": "Figures 1 and 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 950, |
|
"end": 958, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": null |
|
}, |
|
{

"text": "We use three geometry question datasets for evaluating our system: practice and official SAT style geometry questions used in GEOS, and an additional dataset of geometry questions collected from the aforementioned textbooks. We selected a total of 1406 SAT style questions across grades 6-10. This dataset is approximately 7.5 times the size of the datasets used in Seo et al. (2015) . We split the dataset into training (350 questions), development (150 questions) and test (906 questions) with an equal proportion of grade 6-10 questions. We also annotated the training and development set questions with ground-truth logical forms. GEOS used 13 types of entities and 94 functions and predicates. We added some more entities, functions and predicates to cover other, more complex concepts in geometry not covered in GEOS. Thus, we obtained a final set of 19 entity types and 115 functions and predicates. We use the training set to train our semantic parser with the expanded set of entity types, functions and predicates. We used Stanford CoreNLP (Manning et al., 2014) for linguistic pre-processing. We also adapted the GEOS solver to the expanded set of entities, functions and predicates for comparison purposes. We call this system GEOS++.",
|
"cite_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 383, |
|
"text": "Seo et al. (2015)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 1039, |
|
"end": 1061, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{

"text": "We evaluated our joint model of semantic parsing and deduction with various settings for training: training on question-answer pairs or demonstrations alone, or with a combination of question-answer pairs and demonstrations. Table 3 : Scores of various approaches on the SAT practice (P) and official (O) datasets and a dataset of questions from the 20 textbooks (T). We use SAT's grading scheme that rewards a correct answer with a score of 1.0 and penalizes a wrong answer with a negative score of 0.25. O.S. represents our system trained on question-answer (QA) pairs, demonstrations, or a combination of QA pairs and demonstrations.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 154, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quantitative Results", |
|
"sec_num": "6.1" |
|
}, |
|
{

"text": "We compare our joint semantic parsing and deduction models against GEOS and GEOS++.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quantitative Results", |
|
"sec_num": "6.1" |
|
}, |
|
{

"text": "In the first setting, we only use question-answer pairs as supervision. We compare our semantic parsing and deduction model to GEOS and GEOS++ on practice and official SAT style geometry questions from Seo et al. (2015) as well as the dataset of geometry questions collected from the 20 textbooks (see Table 3 ). On all three datasets, our system outperforms GEOS and GEOS++. Especially on the dataset from the 20 textbooks (which is a harder dataset and includes more problems that require the complex reasoning supported by our deduction model), GEOS and GEOS++ do not perform very well, whereas our system achieves a much higher score.",
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 219, |
|
"text": "Seo et al. (2015)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 309, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quantitative Results", |
|
"sec_num": "6.1" |
|
}, |
|
{

"text": "Next, we only use demonstrations to train our joint model (see Table 3 ). We test this model on the aforementioned datasets and compare it to GEOS and GEOS++ trained on the respective datasets. Again, our system outperforms GEOS and GEOS++ on all three datasets. The improvements are especially large on the textbook dataset, since training on demonstrations also trains the deduction model and thereby learns to reason about geometry using axiomatic knowledge.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 70, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quantitative Results", |
|
"sec_num": "6.1" |
|
}, |
|
{

"text": "Finally, we train our semantic parsing and deduction model on a combination of question-answer pairs and demonstrations. This model leads to further improvements over models trained only on question-answer pairs or only on demonstrations. These results (shown in Table 3 ) hold on all three datasets.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 312, |
|
"end": 319, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quantitative Results", |
|
"sec_num": "6.1" |
|
}, |
|
{

"text": "We tested the correctness of the parses and the deductive programs induced by our models. Table 5 : Accuracy of the programs induced by various versions of our joint model trained on question-answer pairs, demonstrations or a combination of the two. We provide results when we use the deduction model or the joint model. First, we compared the parses induced by our models with gold parses on the development set. Table 4 reports the Precision, Recall and F1 scores of the parses induced by our models when only the parsing model or the joint model is used, and compares them with GEOS. We conclude that both our models parse more accurately than GEOS. Furthermore, our joint model of parsing and deduction further improves the parsing accuracy. Then, we compared the programs induced by the aforementioned models with gold program annotations on the textbook dataset. Table 5 reports the accuracy of the programs induced by various versions of our models. Our models, when trained on demonstrations, induce more accurate programs than the semantic parsing and deduction model trained on question-answer pairs. Moreover, the semantic parsing and deduction model trained on question-answer pairs as well as demonstrations achieves an even better accuracy. Our joint model of parsing and deduction induces more accurate programs than the deduction model alone.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 55, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 422, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 886, |
|
"end": 893, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quantitative Results", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "A key benefit of our axiomatic solver is that it provides an easy-to-understand student-friendly demonstrative solution to geometry problems. This is important because students typically learn geometry by rigorous deduction whereas numerical solvers do not provide such interpretability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Study on Interpretability", |
|
"sec_num": "6.2" |
|
}, |
|
{

"text": "To test the interpretability of our axiomatic solver, we asked 50 grade 6-10 students (10 students in each grade) to use GEOS++ and our best performing system (O.S.), trained on question-answer pairs and demonstrations, as a web-based assistive tool. Table 6 reports mean ratings on two facets, Interpretability and Usefulness, each with a GEOS++ column and an O.S. column.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Study on Interpretability", |
|
"sec_num": "6.2" |
|
}, |
|
{

"text": "Grade 6: GEOS++ 2.7, O.S. 3.0 (Interpretability); GEOS++ 2.9, O.S. 3.2 (Usefulness).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grade 6", |
|
"sec_num": null |
|
}, |
|
{

"text": "Grade 7: GEOS++ 3.0, O.S. 3.7 (Interpretability); GEOS++ 3.3, O.S. 3.6 (Usefulness).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grade 6", |
|
"sec_num": null |
|
}, |
|
{

"text": "Grade 8: GEOS++ 2.7, O.S. 3.6 (Interpretability); GEOS++ 3.1, O.S. 3.5 (Usefulness).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grade 6", |
|
"sec_num": null |
|
}, |
|
{

"text": "Grade 9: GEOS++ 2.4, O.S. 3.4 (Interpretability); GEOS++ 3.0, O.S. 3.6 (Usefulness).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grade 6", |
|
"sec_num": null |
|
}, |
|
{

"text": "Grade 10: GEOS++ 2.8, O.S. 3.1 (Interpretability); GEOS++ 3.2, O.S. 3.7 (Usefulness). Overall: GEOS++ 2.7, O.S. 3.4 (Interpretability); GEOS++ 3.1, O.S. 3.5 (Usefulness). Table 6 : User study ratings for GEOS++ and our system (O.S.) trained on question-answer pairs and demonstrations, by a number of grade 6-10 student subjects. Ten students in each grade were asked to rate the two systems on a scale of 1-5 on two facets: 'interpretability' and 'usefulness'. Each cell shows the mean rating computed over the ten students in that grade for that facet. The students were each asked to rate how 'interpretable' and 'useful' the two systems were for their studies on a scale of 1-5. Table 6 shows the mean rating by students in each grade on the two facets. We observe that students of each grade found our system to be more interpretable as well as more useful to them than GEOS++. This study supports the need for, and the efficacy of, an interpretable solver for geometry problems. Our solution can be used as an assistive tool for helping students learn geometry on MOOCs.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 47, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 688, |
|
"end": 695, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Grade 6", |
|
"sec_num": null |
|
}, |
|
{

"text": "Solving Geometry Problems: Standardized tests have recently been proposed as 'drivers for progress in AI'. These tests are easily accessible and measurable, and hence have attracted several NLP researchers. There is a growing body of work on solving standardized tests such as reading comprehensions (Richardson et al., 2013, inter alia), science question answering (Clark, 2015; Schoenick et al., 2016, inter alia), algebra word problems (Kushman et al., 2014; Roy and Roth, 2015, inter alia), geometry problems (Seo et al., 2014, 2015) and pre-university entrance exams (Fujita et al., 2014; Arai and Matsuzaki, 2014). While the problem of using computers to solve geometry questions is old (Feigenbaum and Feldman, 1963; Schattschneider and King, 1997; Davis, 2006), NLP and vision techniques were first used to solve geometry problems in Seo et al. (2015). While Seo et al. (2014) only aligned geometric shapes with their textual mentions, Seo et al. (2015) also extracted geometric relations and built GEOS. We improve GEOS by building an axiomatic solver that performs deductive reasoning by learning from demonstrative problem solutions.",
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 339, |
|
"text": "(Richardson et al., 2013, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 382, |
|
"text": "(Clark, 2015;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 418, |
|
"text": "Schoenick et al., 2016, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 534, |
|
"text": "(Seo et al., 2014", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 535, |
|
"end": 554, |
|
"text": "(Seo et al., , 2015", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 610, |
|
"text": "(Fujita et al., 2014;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 636, |
|
"text": "Arai and Matsuzaki, 2014)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 711, |
|
"end": 741, |
|
"text": "(Feigenbaum and Feldman, 1963;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 773, |
|
"text": "Schattschneider and King, 1997;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 786, |
|
"text": "Davis, 2006)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 861, |
|
"end": 878, |
|
"text": "Seo et al. (2015)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 887, |
|
"end": 904, |
|
"text": "Seo et al. (2014)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 964, |
|
"end": 981, |
|
"text": "Seo et al. (2015)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{

"text": "Learning from Demonstration: Our work follows the learning from demonstration view of machine learning (Schaal, 1997), which stems from work on social learning in developmental psychology (Meltzoff and Moore, 1977; Meltzoff, 1995). Learning from demonstration is a popular way of learning policies from example state-to-action mappings in robotics applications. Imitation learning (Schaal, 1999; Abbeel and Ng, 2004; Ross et al., 2011) is a popular instance of learning from demonstration in which the algorithm observes a human expert perform a series of actions to accomplish the task and learns a policy that \"imitates\" the expert with the purpose of generalizing to unseen data. Imitation learning is increasingly being used in NLP (Vlachos and Clark, 2014; Berant and Liang, 2015; Augenstein et al., 2015; Beck et al., 2016; Goodman et al., 2016a,b). However, all these models focus on learning the respective NLP models from the final supervision, e.g. semantic parses or denotations. In contrast, we provide a technique to learn from demonstrations by learning a joint semantic parsing and deduction model. Another related line of work is Hixon et al. (2015), who acquire knowledge in the form of knowledge graphs for question answering from natural language dialogs, and Goldwasser and Roth (2014), who propose a technique called learning from natural instructions. Learning from natural instructions allows human teachers to interact with an automated learner using natural instructions, allowing the teacher to communicate domain expertise to the learner via natural language. However, this work was evaluated on a very simple Freecell game with a very small number of concepts (3). In contrast, our model is evaluated on the real task of solving SAT style geometry problems.",
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 117, |
|
"text": "(Schaal, 1997)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 217, |
|
"text": "(Meltzoff and Moore, 1977;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 233, |
|
"text": "Meltzoff, 1995)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 399, |
|
"text": "(Schaal, 1999;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 420, |
|
"text": "Abbeel and Ng, 2004;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 421, |
|
"end": 438, |
|
"text": "Ross et al., 2011", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 763, |
|
"text": "(Vlachos and Clark, 2014;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 787, |
|
"text": "Berant and Liang, 2015;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 788, |
|
"end": 812, |
|
"text": "Augenstein et al., 2015;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 813, |
|
"end": 831, |
|
"text": "Beck et al., 2016;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 832, |
|
"end": 856, |
|
"text": "Goodman et al., 2016a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1140, |
|
"end": 1159, |
|
"text": "Hixon et al. (2015)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1271, |
|
"end": 1298, |
|
"text": "(Goldwasser and Roth, 2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{

"text": "Semantic Parsing: Semantic parsing is the NLP task of learning to map language to a formal meaning representation. Early semantic parsers learnt the parsing model from natural language utterances paired with logical forms (Mooney, 1993, 1996; Kate et al., 2005, inter alia). More recently, indirect supervision, such as denotations (Liang et al., 2011; Berant et al., 2013, inter alia) and natural language directions for robot navigation (Shimizu and Haas, 2009; Matuszek et al., 2010; Chen and Mooney, 2011, inter alia), has been used to train these semantic parsers. In most of the above examples, the execution model is fairly simple (e.g. execution of a SQL query in a database, or binary feedback for the interaction of the robot with the environment). In contrast, our work uses demonstrations such as those given in textbooks for learning a semantic parser. Furthermore, our work learns the semantic parser along with the execution model. In our case, the execution model is a program sequence constructed from a set of theorem applications. Thus, our work provides a way to integrate semantic parsing with probabilistic programming. This integration has been pursued before for science diagram question-answering on food-web networks (Krishnamurthy et al., 2016), which is closely related to our work. Technically, our deductive solver and the approach of learning from different kinds of supervision are the same as the execution model in Krishnamurthy et al. (2016). While Krishnamurthy et al. (2016) only has two program encodings, our work involves a much larger number of programs. We also provide an approach for learning from demonstrations.",
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 241, |
|
"text": "Mooney, 1993, 1996;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 272, |
|
"text": "Kate et al., 2005, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 355, |
|
"text": "(Liang et al., 2011;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 356, |
|
"end": 388, |
|
"text": "Berant et al., 2013, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 466, |
|
"text": "(Shimizu and Haas, 2009;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 489, |
|
"text": "Matuszek et al., 2010;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 524, |
|
"text": "Chen and Mooney, 2011, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1444, |
|
"end": 1471, |
|
"text": "Krishnamurthy et al. (2016)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{

"text": "We described an approach that learns to solve SAT style geometry problems using detailed demonstrative solutions in natural language. The approach learns to jointly interpret demonstrations and to use this interpretation to deductively solve geometry problems using axiomatic knowledge. Our approach showed significant improvements over the best previously published work on a number of datasets. A user study conducted with a number of school students studying geometry found our approach to be more interpretable and useful than its predecessors. In the future, we would like to extend our work to other domains such as science QA (Jansen et al., 2016) and use it to assist student learning on platforms such as MOOCs.",
|
"cite_spans": [ |
|
{ |
|
"start": 642, |
|
"end": 663, |
|
"text": "(Jansen et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "http://epathshala.nic.in/ e-pathshala-4/flipbook/ 2 http://www.amazon.in/ Books-R-S-Aggarwal/ 3 http://www.amazon.in/Books-R-Sharma/ 4 http://www.amazon.in/ Books-Aggarwal-M-L/ 5 http://www.tiwariacademy.com/ 6 http://www.schoollamp.com 7 http://www.oswaalbooks.com", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the anonymous reviewers for their valuable comments and suggestions. This work was supported by the following research grants: NSF IIS1447676, ONR N000141410684 and ONR N000141712463.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Apprenticeship learning via inverse reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Pieter", |
|
"middle": [], |
|
"last": "Abbeel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the twenty-first international conference on Machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pieter Abbeel and Andrew Y Ng. 2004. Apprentice- ship learning via inverse reinforcement learning. In Proceedings of the twenty-first international confer- ence on Machine learning. ACM, page 1.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Children benefit from modeling, demonstration, and explanation", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Allington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Cunningham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R.L. Allington and P.M. Cunningham. 2010. Children benefit from modeling, demonstration, and explana- tion .", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The impact of ai on education-can a robot get into the university of tokyo?", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Noriko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takuya", |
|
"middle": [], |
|
"last": "Arai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Matsuzaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proc. ICCE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1034--1042", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noriko H Arai and Takuya Matsuzaki. 2014. The im- pact of ai on education-can a robot get into the uni- versity of tokyo? In Proc. ICCE. pages 1034-1042.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A survey of robot learning from demonstration", |
|
"authors": [ |
|
{ |
|
"first": "Sonia", |
|
"middle": [], |
|
"last": "Brenna D Argall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manuela", |
|
"middle": [], |
|
"last": "Chernova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brett", |
|
"middle": [], |
|
"last": "Veloso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Browning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Robotics and autonomous systems", |
|
"volume": "57", |
|
"issue": "5", |
|
"pages": "469--483", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. 2009. A survey of robot learn- ing from demonstration. Robotics and autonomous systems 57(5):469-483.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Extracting relations between nonstandard entities using distant supervision and imitation learning", |
|
"authors": [ |
|
{ |
|
"first": "Isabelle", |
|
"middle": [], |
|
"last": "Augenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Maynard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "747--757", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Isabelle Augenstein, Andreas Vlachos, and Diana Maynard. 2015. Extracting relations between non- standard entities using distant supervision and im- itation learning. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, pages 747-757.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "SHEF-MIME: word-level quality estimation using imitation learning", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Beck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gustavo", |
|
"middle": [], |
|
"last": "Paetzold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "772--776", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Beck, Andreas Vlachos, Gustavo Paetzold, and Lucia Specia. 2016. SHEF-MIME: word-level qual- ity estimation using imitation learning. In Proceed- ings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 11- 12, Berlin, Germany. pages 772-776.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semantic parsing on freebase from question-answer pairs", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Chou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Frostig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "2013", |
|
"issue": "", |
|
"pages": "1533--1544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1533-1544.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Imitation learning of agenda-based semantic parsers", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "545--558", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant and Percy Liang. 2015. Imitation learning of agenda-based semantic parsers. Trans- actions of the Association for Computational Lin- guistics 3:545-558.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Children\u00e2\u0202\u0179s imitation of causal action sequences is influenced by statistical and pedagogical evidence", |
|
"authors": [ |
|
{ |
|
"first": "Daphna", |
|
"middle": [], |
|
"last": "Buchsbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alison", |
|
"middle": [], |
|
"last": "Gopnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shafto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Cognition", |
|
"volume": "120", |
|
"issue": "3", |
|
"pages": "331--340", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daphna Buchsbaum, Alison Gopnik, Thomas L Grif- fiths, and Patrick Shafto. 2011. Children\u00e2\u0202\u0179s im- itation of causal action sequences is influenced by statistical and pedagogical evidence. Cognition 120(3):331-340.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Issues, tasks and program structures to roadmap research in question & answering (q&a)", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Burger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vinay", |
|
"middle": [], |
|
"last": "Chaudhri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gaizauskas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Israel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Jacquemin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Maiorano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Burger, Claire Cardie, Vinay Chaudhri, Robert Gaizauskas, Sanda Harabagiu, David Israel, Chris- tian Jacquemin, Chin-Yew Lin, Steve Maiorano, et al. 2001. Issues, tasks and program structures to roadmap research in question & answering (q&a) .", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Preschoolers use pedagogical cues to guide radical reorganization of category knowledge", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Lucas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellen", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Butler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Markman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Cognition", |
|
"volume": "130", |
|
"issue": "1", |
|
"pages": "116--127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucas P Butler and Ellen M Markman. 2014. Preschoolers use pedagogical cues to guide radical reorganization of category knowledge. Cognition 130(1):116-127.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Learning to interpret natural language navigation instructions from observations", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-2011)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "859--865", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David L. Chen and Raymond J. Mooney. 2011. Learn- ing to interpret natural language navigation instruc- tions from observations. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI- 2011). pages 859-865.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Elementary School Science and Math Tests as a Driver for AI:Take the Aristo Challenge!", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of IAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Clark. 2015. Elementary School Science and Math Tests as a Driver for AI:Take the Aristo Chal- lenge! In Proceedings of IAAI.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "My computer is an honor student -but how intelligent is it? standardized tests as a measure of ai", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of AI Magazine", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Clark and Oren Etzioni. 2016. My computer is an honor student -but how intelligent is it? stan- dardized tests as a measure of ai. In Proceedings of AI Magazine.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Geometry with computers", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom Davis. 2006. Geometry with computers. Techni- cal report.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Computers and thought", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Edward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Feigenbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Feldman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1963, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward A Feigenbaum and Julian Feldman. 1963. Computers and thought. The AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The future of engineering education ii. teaching methods that work", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Richard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Felder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Woods", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armando", |
|
"middle": [], |
|
"last": "Stice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Rugarcia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Chemical Engineering Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--39", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard M Felder, Donald R Woods, James E Stice, and Armando Rugarcia. 2000. The future of en- gineering education ii. teaching methods that work. Chemical Engineering Education pages 26-39.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Overview of todai robot project and evaluation framework of its nlp-based problem solving", |
|
"authors": [ |
|
{ |
|
"first": "Akira", |
|
"middle": [], |
|
"last": "Fujita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akihiro", |
|
"middle": [], |
|
"last": "Kameda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ai", |
|
"middle": [], |
|
"last": "Kawazoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "World History", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akira Fujita, Akihiro Kameda, Ai Kawazoe, and Yusuke Miyao. 2014. Overview of todai robot project and evaluation framework of its nlp-based problem solving. World History 36:36.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Learning from natural instructions", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Goldwasser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Machine Learning", |
|
"volume": "94", |
|
"issue": "2", |
|
"pages": "205--232", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Goldwasser and Dan Roth. 2014. Learning from natural instructions. Machine Learning 94(2):205- 232.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Noise reduction and targeted exploration in imitation learning for abstract meaning representation parsing", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Naradowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Goodman, Andreas Vlachos, and Jason Narad- owsky. 2016a. Noise reduction and targeted explo- ration in imitation learning for abstract meaning rep- resentation parsing. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Ucl+ sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an \u03b1bound", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Naradowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of SemEval pages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1167--1172", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Goodman, Andreas Vlachos, and Jason Narad- owsky. 2016b. Ucl+ sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an \u03b1- bound. Proceedings of SemEval pages 1167-1172.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Learning knowledge graphs for question answering through conversational dialog", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Hixon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "851--861", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Hixon, Peter Clark, and Hannaneh Hajishirzi. 2015. Learning knowledge graphs for question an- swering through conversational dialog. In NAACL HLT 2015, The 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Den- ver, Colorado, USA, May 31 -June 5, 2015. pages 851-861. http://aclweb.org/anthology/N/N15/N15- 1086.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "What's in an explanation? characterizing knowledge and inference requirements for elementary science exams", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Jansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niranjan", |
|
"middle": [], |
|
"last": "Balasubramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2956--2965", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Jansen, Niranjan Balasubramanian, Mihai Sur- deanu, and Peter Clark. 2016. What's in an explanation? characterizing knowledge and in- ference requirements for elementary science ex- ams. In COLING 2016, 26th International Con- ference on Computational Linguistics, Proceed- ings of the Conference: Technical Papers, Decem- ber 11-16, 2016, Osaka, Japan. pages 2956-2965. http://aclweb.org/anthology/C/C16/C16-1278.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learning to transform natural to formal languages", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Rohit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuk", |
|
"middle": [], |
|
"last": "Kate", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wong", |
|
"middle": [], |
|
"last": "Wah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Raymond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of AAAI-05. Citeseer", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rohit J Kate, Yuk Wah, Wong Raymond, and J Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of AAAI-05. Cite- seer.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Semantic parsing to probabilistic programs for situated question answering", |
|
"authors": [ |
|
{ |
|
"first": "Jayant", |
|
"middle": [], |
|
"last": "Krishnamurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oyvind", |
|
"middle": [], |
|
"last": "Tafjord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aniruddha", |
|
"middle": [], |
|
"last": "Kembhavi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jayant Krishnamurthy, Oyvind Tafjord, and Aniruddha Kembhavi. 2016. Semantic parsing to probabilistic programs for situated question answering. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. The As- sociation for Computational Linguistics, pages 160- 170.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Learning to automatically solve algebra word problems", |
|
"authors": [ |
|
{ |
|
"first": "Nate", |
|
"middle": [], |
|
"last": "Kushman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the Annual Meeting of the Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Learning dependency-based compositional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Michael I Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "590--599", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang, Michael I Jordan, and Dan Klein. 2011. Learning dependency-based compositional seman- tics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies-Volume 1. Association for Computational Linguistics, pages 590-599.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Association for Computational Linguistics (ACL) System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Following directions using statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Cynthia", |
|
"middle": [], |
|
"last": "Matuszek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dieter", |
|
"middle": [], |
|
"last": "Fox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Koscher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "5th ACM/IEEE International Conference on Human-Robot Interaction (HRI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "251--258", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cynthia Matuszek, Dieter Fox, and Karl Koscher. 2010. Following directions using statistical ma- chine translation. In 2010 5th ACM/IEEE Inter- national Conference on Human-Robot Interaction (HRI). IEEE, pages 251-258.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Understanding the intentions of others: re-enactment of intended acts by 18-month-old children", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Meltzoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Developmental psychology", |
|
"volume": "31", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew N Meltzoff. 1995. Understanding the inten- tions of others: re-enactment of intended acts by 18-month-old children. Developmental psychology 31(5):838.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Imitation of facial and manual gestures by human neonates", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"Keith" |
|
], |
|
"last": "Meltzoff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Science", |
|
"volume": "198", |
|
"issue": "4312", |
|
"pages": "75--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew N. Meltzoff and M. Keith Moore. 1977. Imitation of facial and manual gestures by human neonates. Science 198(4312):75-78.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Mctest: A challenge dataset for the open-domain machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Burges", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Renshaw", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of Empirical Methods in Natural Lan- guage Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A reduction of imitation learning and structured prediction to no-regret online learning", |
|
"authors": [ |
|
{ |
|
"first": "St\u00e9phane", |
|
"middle": [], |
|
"last": "Ross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Gordon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Drew", |
|
"middle": [], |
|
"last": "Bagnell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "627--635", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "St\u00e9phane Ross, Geoffrey J. Gordon, and Drew Bag- nell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Confer- ence on Artificial Intelligence and Statistics, AIS- TATS 2011, Fort Lauderdale, USA, April 11-13, 2011. pages 627-635.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Solving general arithmetic word problems", |
|
"authors": [ |
|
{ |
|
"first": "Subhro", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Subhro Roy and Dan Roth. 2015. Solving gen- eral arithmetic word problems. In Proceedings of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Learning from demonstration", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schaal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Advances in Neural Information Processing Systems 9", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1040--1046", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Schaal. 1997. Learning from demonstration. In M. I. Jordan and T. Petsche, editors, Advances in Neural Information Processing Systems 9, MIT Press, pages 1040-1046.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Is imitation learning the route to humanoid robots?", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schaal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Trends in cognitive sciences", |
|
"volume": "3", |
|
"issue": "6", |
|
"pages": "233--242", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Schaal. 1999. Is imitation learning the route to humanoid robots? Trends in cognitive sciences 3(6):233-242.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Geometry Turned On: Dynamic Software in Learning, Teaching, and Research", |
|
"authors": [ |
|
{ |
|
"first": "Doris", |
|
"middle": [], |
|
"last": "Schattschneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Doris Schattschneider and James King. 1997. Geom- etry Turned On: Dynamic Software in Learning, Teaching, and Research. Mathematical Association of America Notes.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Moving beyond the turing test with the allen AI science challenge", |
|
"authors": [ |
|
{ |
|
"first": "Carissa", |
|
"middle": [], |
|
"last": "Schoenick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oyvind", |
|
"middle": [], |
|
"last": "Tafjord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Turney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carissa Schoenick, Peter Clark, Oyvind Tafjord, Peter D. Turney, and Oren Etzioni. 2016. Moving beyond the turing test with the allen AI science challenge. CoRR abs/1604.04315.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Diagram understanding in geometry questions", |
|
"authors": [ |
|
{ |
|
"first": "Min Joon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Min Joon Seo, Hannaneh Hajishirzi, Ali Farhadi, and Oren Etzioni. 2014. Diagram understanding in ge- ometry questions. In Proceedings of AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Solving geometry problems: combining text and diagram interpretation", |
|
"authors": [ |
|
{ |
|
"first": "Min Joon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clint", |
|
"middle": [], |
|
"last": "Malcolm", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Min Joon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geome- try problems: combining text and diagram interpre- tation. In Proceedings of EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Learning to follow navigational route instructions", |
|
"authors": [ |
|
{ |
|
"first": "Nobuyuki", |
|
"middle": [], |
|
"last": "Shimizu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Haas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "IJCAI 2009, Proceedings of the 21st International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1488--1493", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nobuyuki Shimizu and Andrew R. Haas. 2009. Learn- ing to follow navigational route instructions. In IJCAI 2009, Proceedings of the 21st Interna- tional Joint Conference on Artificial Intelligence, Pasadena, California, USA, July 11-17, 2009. pages 1488-1493.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "A new corpus and imitation learning framework for contextdependent semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "547--559", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Vlachos and Stephen Clark. 2014. A new cor- pus and imitation learning framework for context- dependent semantic parsing. Transactions of the As- sociation for Computational Linguistics 2:547-559.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Takelab: Systems for measuring semantic text similarity", |
|
"authors": [ |
|
{ |
|
"first": "Frane", |
|
"middle": [], |
|
"last": "\u0160ari\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mladen", |
|
"middle": [], |
|
"last": "Karan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "441--448", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frane \u0160ari\u0107, Goran Glava\u0161, Mladen Karan, Jan \u0160na- jder, and Bojana Dalbelo Ba\u0161i\u0107. 2012. Takelab: Systems for measuring semantic text similar- ity. In Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012). Association for Computational Lin- guistics, Montr\u00e9al, Canada, pages 441-448. http://www.aclweb.org/anthology/S12-1060.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Learning semantic grammars with constructive inductive logic programming", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Zelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the 11th National Conference on Artificial Intelligence. Washington", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "817--822", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John M. Zelle and Raymond J. Mooney. 1993. Learn- ing semantic grammars with constructive inductive logic programming. In Proceedings of the 11th Na- tional Conference on Artificial Intelligence. Wash- ington, DC, USA, July 11-15, 1993.. pages 817-822.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Learning to parse database queries using inductive logic programming", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond J", |
|
"middle": [], |
|
"last": "Zelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John M Zelle and Raymond J Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In In Proceedings of the Thirteenth National Conference on Artificial Intelligence.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "circle O), liesOn( B, circle O), liesOn( C, circle O), liesOn( D, circle O) isLine(AB), isLine(BC), isLine(CA), isLine(BD), isLine(DA) isTriangle(ABC), isTriangle(ABD), isTriangle(AOM) measure( ADB, x), measure( MAO, 30 o ) measure( AMO, 90 o ) \u2026" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": ") = angle(ABD) + angle(DBC) Supplementary Angles perpendicular(AB,CD) \u2227 liesOn(C,AB) angle(ACD) + angle(DCB) = 180 \u2022 Vertically Opp. Angles intersectAt(AB, CD, M) angle(AMC) = angle(BMD)" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "State sequence corresponding to the demonstration in" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Examples of geometry theorems as horn clause rules." |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">Deduction Joint</td></tr><tr><td>QA Pairs</td><td>0.56</td><td>0.61</td></tr><tr><td>Demonstrations</td><td>0.64</td><td>0.68</td></tr><tr><td>QA + Demonstrations</td><td>0.68</td><td>0.70</td></tr></table>", |
|
"html": null, |
|
"text": "Precision, Recall and F1 scores of the parses induced by GEOS and our models when only the parsing model or the joint model is used." |
|
} |
|
} |
|
} |
|
} |