|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:31:12.528599Z" |
|
}, |
|
"title": "Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data", |
|
"authors": [ |
|
{ |
|
"first": "Moshe", |
|
"middle": [], |
|
"last": "Hazoom", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Vibhor", |
|
"middle": [], |
|
"last": "Malik", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Columbia University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Bogin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tel Aviv University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Most available semantic parsing datasets, comprising of pairs of natural utterances and logical forms, were collected solely for the purpose of training and evaluation of natural language understanding systems. As a result, they do not contain any of the richness and variety of natural-occurring utterances, where humans ask about data they need or are curious about. In this work, we release SEDE, a dataset with 12,023 pairs of utterances and SQL queries collected from real usage on the Stack Exchange website. We show that these pairs contain a variety of real-world challenges which were rarely reflected so far in any other semantic parsing dataset, propose an evaluation metric based on comparison of partial query clauses that is more suitable for realworld queries, and conduct experiments with strong baselines, showing a large gap between the performance on SEDE compared to other common datasets.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Most available semantic parsing datasets, comprising of pairs of natural utterances and logical forms, were collected solely for the purpose of training and evaluation of natural language understanding systems. As a result, they do not contain any of the richness and variety of natural-occurring utterances, where humans ask about data they need or are curious about. In this work, we release SEDE, a dataset with 12,023 pairs of utterances and SQL queries collected from real usage on the Stack Exchange website. We show that these pairs contain a variety of real-world challenges which were rarely reflected so far in any other semantic parsing dataset, propose an evaluation metric based on comparison of partial query clauses that is more suitable for realworld queries, and conduct experiments with strong baselines, showing a large gap between the performance on SEDE compared to other common datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Semantic parsing, the task of mapping natural language into logical forms that can be executed on a database or knowledge graph, has been studied mostly on academic datasets, where both the utterances and the queries were written as part of a dataset collection process (Hemphill et al., 1990; Zelle and Mooney, 1996; Yu et al., 2018) , and not in a natural process where users ask questions about data they need or are curious about. As a result, these datasets generally do not contain any of the richness and diversity of natural-occurring utterances, even if the data on which the questions are asked about is collected from a real-world source.", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 293, |
|
"text": "(Hemphill et al., 1990;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 317, |
|
"text": "Zelle and Mooney, 1996;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 334, |
|
"text": "Yu et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recent methods (Wang et al., 2020a; Herzig et al., 2020; Yu et al., 2021) have significantly improved results on such academic datasets: state-ofthe-art models have yield impressive results of over Title: Questions which attract bad answers Description: posts which have attracted significantly more controversial or bad answers than good ones SELECT p.Id as [Post Link] , p.Score from ( SELECT p.ParentId, count( * ) as ContACnt from ( SELECT PostId, up = sum(case when VoteTypeId = 2 then 1 else 0 end), down = sum(case when VoteTypeId = 3 then 1 else 0 end) FROM Votes v join Posts p on p.Id = v.PostId WHERE VoteTypeId in (2,3) and PostTypeId = 2 group by PostId ) as ContA JOIN posts p on ContA.PostId = p.Id WHERE down > (up / ##UVDVRatio:int##) and (down + up) > ##MinVotes:int## GROUP BY p.ParentId ) as ContQ JOIN posts p on ContQ.ParentId = p.Id WHERE ContQ.ContACnt > (p.AnswerCount / 2) and p.AnswerCount > 1 ORDER BY Score desc Table 1 : Example from SEDE for a title and description given by the user, together with the SQL query that the user has written. 70%, for example, on Spider (Yu et al., 2018 ) in a challenging cross-domain setup, where models are trained and tested on different domains, and up to 80%-90% (Nguyen et al., 2021; Zhao and Huang, 2014) on single-domain datasets such as ATIS (Hemphill et al., 1990) and GeoQuery (Zelle and Mooney, 1996) . While the cross-domain, zeroshot setup introduces many generalization challenges such as non-explicit mentioning of column names and domain-specific phrases (Suhr et al., 2020; Deng et al., 2020) , we argue that even in the easier single-domain setup, it is still unclear how well state-of-the-art models generalize to the challenges that arise from real-world utterances and queries.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 35, |
|
"text": "(Wang et al., 2020a;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 36, |
|
"end": 56, |
|
"text": "Herzig et al., 2020;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 57, |
|
"end": 73, |
|
"text": "Yu et al., 2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 370, |
|
"text": "[Post Link]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 417, |
|
"text": "( SELECT p.ParentId, count( * )", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1099, |
|
"end": 1115, |
|
"text": "(Yu et al., 2018", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1231, |
|
"end": 1252, |
|
"text": "(Nguyen et al., 2021;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1253, |
|
"end": 1274, |
|
"text": "Zhao and Huang, 2014)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1314, |
|
"end": 1337, |
|
"text": "(Hemphill et al., 1990)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1342, |
|
"end": 1375, |
|
"text": "GeoQuery (Zelle and Mooney, 1996)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1535, |
|
"end": 1554, |
|
"text": "(Suhr et al., 2020;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1555, |
|
"end": 1573, |
|
"text": "Deng et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 941, |
|
"end": 948, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we take a significant step towards evaluation of Text-to-SQL models in a real-world setting, by releasing SEDE: a dataset comprised of 12,023 complex and diverse SQL queries and their natural language titles and descriptions, written by real users of the Stack Exchange Data Explorer out of a natural interaction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Table 1 we show an example for a SQL query from SEDE, with its title and description. It introduces several challenges that have not been commonly addressed in currently available datasets: comparison between different subsets, complex usage of 2 nested sub-queries and an under-specified question, which doesn't state what \"significantly more\" means (solved in this case with an input parameter, ##UVDVRation##).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Compared to other Text-to-SQL datasets, we show that SEDE contains at least 10 times more SQL queries templates (queries after canonization and anonymization of values) than other datasets, and has the most diverse set of utterances and SQL queries (in terms of 3-grams) out of all singledomain datasets. We manually analyze a sample of examples from the dataset and list the introduced challenges, such as under-specification, usage of parameters in queries, dates manipulation and more.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We also address the challenging problem of evaluating naturally-occurring Text-to-SQL datasets. In academic datasets, standard evaluation metrics such as denotation accuracy and exact comparison of SQL components can often be used with relative success, but we found this to be a greater challenge in SEDE. Denotation accuracy is inaccurate for under-specified utterances, where any single clause not mentioned in the question could entirely change execution results, while exact match comparison of SQL components (e.g. comparing all SELECT, WHERE, GROUP BY and ORDER BY clauses) are often too strict when queries are highly complex. While solving these issues still remains an open problem, to at least partially address them we propose to measure a softer version of the exact match metric, PCM-F1, based on partially extracted queries components, and show that this metric gives a better indication of models' performance than common metrics, which yield a score that is close to 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Finally, we test strong baselines on our dataset, and show that even models that get strong results on Spider's development set (63.2% Exact-Match, 86.3% PCM-F1), perform poorly on our dataset, with a PCM-F1 value of 50.6%. We hope that the unique and challenging properties exhibited in SEDE 1 will pave a path for future work on gen-1 Our dataset and code to run all experiments and metrics is eralization of Text-to-SQL models in real world setups.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the past decades, a broad selection of datasets have been used as benchmarks for semantic parsing: ATIS (Hemphill et al., 1990) , GeoQuery (Zelle and Mooney, 1996) , Restaurants (Tang and Mooney, 2000) , Scholar (Iyer et al., 2017) , Academic (Li and Jagadish, 2014) , Yelp and IMDB (Yaghmazadeh et al., 2017) , Advising (Finegan-Dollak et al., 2018) , WikiSQL (Zhong et al., 2017) , Spider (Yu et al., 2018) , WikiTableQuestions (Pasupat and Liang, 2015), Overnight (Wang et al., 2015) and more. However, the utterances and queries in all of these academic datasets, to the best of our knowledge, were collected explicitly for the purpose of evaluating semantic parsing models, usually with the help of crowd-sourcing (even though in most cases questions are asked about real data). As such, these academic datasets were generated in an artificial process, which often introduces various simplifications and artifacts which are not seen in real-life.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 130, |
|
"text": "(Hemphill et al., 1990)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 166, |
|
"text": "(Zelle and Mooney, 1996)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 204, |
|
"text": "Restaurants (Tang and Mooney, 2000)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 234, |
|
"text": "(Iyer et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 269, |
|
"text": "(Li and Jagadish, 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 312, |
|
"text": "(Yaghmazadeh et al., 2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 353, |
|
"text": "(Finegan-Dollak et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 384, |
|
"text": "(Zhong et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 411, |
|
"text": "(Yu et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 489, |
|
"text": "(Wang et al., 2015)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Utterance-Query alignment One arising issue with this artificial process is that utterances are often aligned to their SQL queries counterparts, such that the columns and the required computations are explicitly mentioned (Suhr et al., 2020; Deng et al., 2020) . In contrast, natural utterances often do not explicitly mention these, since the schema of the database is not necessarily known to the asking user (for example, the question from Spider \"titles of films that include 'Deleted Scenes' in their special feature section\" might have been more naturally phrased as \"films with deleted scenes\" in a real-world setting).", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 241, |
|
"text": "(Suhr et al., 2020;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 260, |
|
"text": "Deng et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Well-specified utterances Furthermore, the utterances in academic datasets are mostly wellspecified, whereas in contrast, natural utterances are often under-specified or ambiguous; they could be interpreted in different ways and in turn be mapped to different SQL queries. Consider the example in Table 1 : the definition of \"bad answers\" is not well-defined, and in fact could be subjective. Since under-specified utterances, by definition, can not always be answered correctly, any human or machine attempting to answer such a question would have to either make an assumption available at https://github.com/hirupert/sede. on the requirement (usually based on previously seen examples) or ask follow-up questions in an interactive setting Elgohary et al., 2020 Elgohary et al., , 2021 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 741, |
|
"end": 762, |
|
"text": "Elgohary et al., 2020", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 763, |
|
"end": 786, |
|
"text": "Elgohary et al., , 2021", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 297, |
|
"end": 304, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Scope Last, in academic datasets the utterances are usually written by crowd-sourced workers, asked to provide utterances on various data domains which they do not necessarily need or are interested with. As a result, the utterances and queries are often not very diverse or realistic, are inherently limited in scope, and might not reflect real-world utterances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To introduce a realistic Text-to-SQL benchmark, we gather SQL queries together with their titles and descriptions from a naturally occurring dataset: the Stack Exchange Data Explorer. Stack Exchange is an online question & answers community, with over 3 million questions asked. The Data Explorer 2 allows any user to query the database of Stack Exchange with T-SQL (a SQL variant) to answer any question they are curious about. The database schema 3 is spread across 29 tables and 211 columns. Common utterance topics are published posts, comments, votes, tags, awards, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stack Exchange Data Explorer", |
|
"sec_num": "3" |
|
}, |
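
{

"text": "As an illustrative sketch (written by us for exposition, not taken from the query log), a typical Data Explorer query over this schema might look like: SELECT TOP 10 Id AS [Post Link], Score FROM Posts WHERE PostTypeId = 1 ORDER BY Score DESC /* the ten highest-scoring questions; PostTypeId = 1 denotes questions */",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Stack Exchange Data Explorer",

"sec_num": "3"

},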
|
{ |
|
"text": "Any query that users run in the data explorer is logged, and users are able to save the queries with a title and description for future use by the public. All of these logs are available online, and Stack Exchange have agreed to release these queries, together with their title, description and other metadata. We publish our clean version of this log, which contains 12,023 samples, of which a subset of 1,714 examples is verified by humans to be correct and is used for validation and test. In this section, we explain the cleaning process, analyze the characteristics of the dataset and compare it to other semantic parsing datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stack Exchange Data Explorer", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The raw aggregated log contains over 1.6 million queries, however in its raw form many of the rows are duplicated or contain unusable queries or titles. The reason for this large difference between the original data size and the cleaned version is that any time that the author of the query executes it, an entry is saved to the log. This introduces two issues: First, many of the queries are not complete, since they were executed before writing the entire query (these incomplete queries are usually valid and executable, but are missing some expressions with respect to the given title and description). Second, after completing the writing of a correct query, users often keep changing and executing the query, but they do not update the title and description accordingly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data cleaning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To alleviate these issues, we write rule-based filters that remove bad queries/descriptions pairs with high precision. For example, we filter out examples with numbers in the description, if these numbers do not appear in the query (refer to the preprocessing script in the repository for the complete list of filters and the number of examples each of them filter). Whenever a query has multiple versions due to multiple executions, we take the last executed query which passed all filters. After this filtering step, we are left with 12,309 examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data cleaning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Using these filters cleans most of the noise, but not all of it. To complete the cleaning process, we manually go over the examples in the validation and test sets, and either filter-out wrong examples or perform minimal changes to either the utterances or the queries (for example, fix a wrong textual value) to ensure that models are evaluated with correct data. Out of the 2,000 examples that we have evaluated, we have kept 1,024 and fixed 690 4 , leading to a total of 1,714 validated examples which we use for validation and test. While we do not perform verification on the training set, the verification procedure on the validation set allows us to estimate that most of the queries (85.7%) are either entirely accurate or need just a minimal change to be entirely accurate. For example, when the utterance is \"users in Brazil\" while the matching query contains the expression: WHERE users.location like %russia% we either change the utterance to \"users in russia\" or change the expression to WHERE users.location like %Brazil%. The final number of all training, validation and test examples is 12,023.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data cleaning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In amples from SEDE and define 7 categories of introduced challenges. To quantify how often each of these concepts appear in SEDE in comparison to other datasets (SPIDER and ATIS), we sample a subset of equal size from each of the other datasets and count the appearances of these concepts. The analysis is shown in Table 2 . Next, we describe each of these concepts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 323, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset Characteristics", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Under specification and Hidden assumptions Utterances in SEDE are often under-specified, that is, they could be interpreted in different ways. For example, when users write \"top users\", they might refer to users with the most reputation, but also to users that have written the most answers. Likewise, when users write \"last 500 posts\" they might expect to get just the title field of the posts, but possibly also IDs and dates. Similarly, query authors often add various assumptions to the queries which are not mentioned in the questions, because they require some knowledge of the available data. For example, they might filter out a special \"Community\" user in StackExchange, which should not be accounted for in computation of votes. We consider an utterance/query pair to be under-specified or contain an hidden assumption whenever the query contains an expression in any of the SQL clauses (SELECT, WHERE, etc.) which is not specified in the utterance, or where it is specified in an ambiguous way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Characteristics", |
|
"sec_num": "3.2" |
|
}, |
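
{

"text": "To make this concrete, the following sketch (ours, not an example from SEDE) shows two plausible readings of \"top users\": SELECT TOP 10 Id, DisplayName, Reputation FROM Users ORDER BY Reputation DESC /* reading 1: users with the most reputation */ versus SELECT TOP 10 OwnerUserId, count(*) AS Answers FROM Posts WHERE PostTypeId = 2 GROUP BY OwnerUserId ORDER BY count(*) DESC /* reading 2: users with the most answers */; a hidden assumption such as excluding the special \"Community\" user would additionally add a filter of the form OwnerUserId != -1 (the concrete user id used here is our assumption).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset Characteristics",

"sec_num": "3.2"

},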
|
{ |
|
"text": "Parameters In some cases, query authors can address under-specified utterances by letting the user fill in the under-specified parameters, which are marked in SEDE with either two hashtags (#) on each side of the parameter name, optionally including the required value type (int, string, etc.) and a default value (e.g. ##UserId:int##), or using a declared variable using SQL syntax (e.g. @UserId). For example, in Table 1 , the parameter ##UVDVRatio:int## is used to indicate that the user should fill in an integer to specify the ratio that \"significantly more\" refers to. More broadly, parameters are also helpful for re-usability, allowing users unfamiliar with a query to effortlessly change some values in it.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 415, |
|
"end": 422, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset Characteristics", |
|
"sec_num": "3.2" |
|
}, |
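
{

"text": "As a minimal sketch of this parameter syntax (the parameter and variable names here are ours, not taken from the dataset): DECLARE @MinRep int = ##MinReputation:int## SELECT TOP ##NumUsers:int## Id, DisplayName, Reputation FROM Users WHERE Reputation >= @MinRep ORDER BY Reputation DESC /* both the reputation threshold and the number of returned users are left for the person running the query to fill in */",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset Characteristics",

"sec_num": "3.2"

},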
|
{ |
|
"text": "Window functions Window functions operate on a set of rows and return a single value for each row from the underlying query, thus allowing to perform various aggregation operators without the need for a separate aggregation query. Window functions are often used in SEDE to report percentiles of a specific value in a row, by using operators such as ROW_NUMBER() OVER, NTILE, TOP(X) PERCENT, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Characteristics", |
|
"sec_num": "3.2" |
|
}, |
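
{

"text": "A minimal sketch of such usage (ours, not from the dataset): SELECT Id, Score, ROW_NUMBER() OVER (ORDER BY Score DESC) AS RowNum, NTILE(100) OVER (ORDER BY Score DESC) AS Percentile FROM Posts WHERE PostTypeId = 1 /* ranks questions by score and assigns each to one of 100 percentile buckets without a separate aggregation query */",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset Characteristics",

"sec_num": "3.2"

},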
|
{ |
|
"text": "Dates manipulation Queries in SEDE sometimes contain dates arithmetic expressions. See the example category query in Table 2 : this expression calculates the difference in seconds from the time the question was created to the time the answer was created.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 124, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset Characteristics", |
|
"sec_num": "3.2" |
|
}, |
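
{

"text": "A minimal sketch of such a date expression (ours, not from the dataset): SELECT a.Id AS [Post Link], DATEDIFF(second, q.CreationDate, a.CreationDate) AS SecondsToAnswer FROM Posts a JOIN Posts q ON a.ParentId = q.Id WHERE a.PostTypeId = 2 /* time from question creation to answer creation, in seconds */",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset Characteristics",

"sec_num": "3.2"

},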
|
{ |
|
"text": "Queries can perform any arbitrary numerical computation and text manipulation. The computations in SEDE often include multiple nested operators including rounding and conversions to float, for example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numerical computations and text manipulation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ROUND(CAST(Main.Total AS FLOAT) / Meta.Total, 2) AS 'Ratio'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numerical computations and text manipulation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Queries can also contain text manipulation such as concatenation, for example: 'stackoverflow.com/tags/' + t.tagName + '/info' as [Link] which builds a URL from a tag name. DECLARE/WITH SQL queries can be written as a procedural process, where multiple commands are executed sequentially. Query authors can store values in simple variables with DECLARE, but more importantly, they can store complete \"views\" of tables with the WITH command. While these commands do not add any expressivity (that is, any query can be written without these commands), they allow writing more clear and concise queries with less nested expressions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 136, |
|
"text": "[Link]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numerical computations and text manipulation", |
|
"sec_num": null |
|
}, |
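
{

"text": "A minimal sketch of the WITH pattern (ours, not from the dataset): WITH UpVotes AS ( SELECT PostId, count(*) AS Cnt FROM Votes WHERE VoteTypeId = 2 GROUP BY PostId ) SELECT p.Id AS [Post Link], uv.Cnt FROM Posts p JOIN UpVotes uv ON uv.PostId = p.Id ORDER BY uv.Cnt DESC /* the named intermediate view UpVotes (VoteTypeId = 2 denotes an up-vote) avoids a nested FROM clause */",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Numerical computations and text manipulation",

"sec_num": null

},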
|
{ |
|
"text": "1 \u2020 375K 209K 1 \u2020 488 \u2020 165.3 \u2020 Academic 196 \u2020 185 \u2020 3.0 \u2020 <1K <1K 1.04 \u2020 92 \u2020 2.1 \u2020 Advising 4,570 \u2020 211 \u2020 3.0 \u2020 20K 11.2K 1.18 \u2020 174 \u2020 20.3 \u2020 ATIS 5,280 \u2020 947 \u2020 3.8 \u2020 13.2K 5.8K 1.39 \u2020 751 \u2020 7.0 \u2020 GeoQuery 877 \u2020 246 \u2020 1.1 \u2020 1.5K 1.4K 2.03 \u2020 98 \u2020 8.9 \u2020 IMDB 131 \u2020 89 \u2020 1.9 \u2020 <1K <1K 1.01 \u2020 52 \u2020 2.5 \u2020 Restaurants 378 \u2020 23 \u2020 2.3 \u2020 <1K <1K 1.17 \u2020 17 \u2020 22.2 \u2020 Scholar 817 \u2020 193 \u2020 3.2 \u2020 2.6K 2.2K 1.02 \u2020 146 \u2020 5.6 \u2020 Yelp 128 \u2020 110 \u2020 1.0 \u2020 <1K <1K 1.0 \u2020 89 \u2020 1.4 \u2020 SEDE", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numerical computations and text manipulation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CASE The CASE clause is similar to an if-thenelse statement of any programming language, and is often used to either make the query more readable (e.g. by returning names of values instead of integers) or to perform conditional logic. For example, the clause in Table 2 (last row) counts negative scores using CASE function.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 269, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Numerical computations and text manipulation", |
|
"sec_num": null |
|
}, |
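
{

"text": "A minimal sketch of such conditional logic (ours, not from the dataset): SELECT OwnerUserId, sum(CASE WHEN Score < 0 THEN 1 ELSE 0 END) AS NegativelyScored, sum(CASE WHEN Score >= 0 THEN 1 ELSE 0 END) AS NonNegativelyScored FROM Posts GROUP BY OwnerUserId /* per-user counts of negatively and non-negatively scored posts, computed with CASE inside an aggregation */",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Numerical computations and text manipulation",

"sec_num": null

},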
|
{ |
|
"text": "Comparison In Table 2 we see that a vast majority of SEDE is not well-specified, which implies that in order for Text-to-SQL models to work robustly in a real-world setting, it should identify cases of ambiguity and possibly proceed with follow-up questions. We see that the rest of the concepts appear in 10% to 40% of SEDE examples, whereas these concepts are not exhibited in any other analyzed dataset. Next, we show a comparison of quantifiable metrics of popular Text-to-SQL datasets compared to SEDE in Table 3 . We see that SEDE is the largest dataset in terms of unique utterances and queries out of all single-domain datasets. To compare diversity and scope, we also measure the number of unique 3-grams for both the utterances and the queries, and see that SEDE has a very diverse set of SQL 3-grams, with almost 6 times the number of the next follower, Spider, and only 17% less than WikiSQL, which is 6.6 bigger in terms of queries. The number of utterance 3-gram is the second largest, after WikiSQL. Last, we count the number of unique SQL templates, as defined in Finegan-Dollak et al. 2018: we anonymize the values and group all canonized queries. We see that SEDE has more than 10 times templates than the follower Spider, and that the average number of queries per template is the lowest. We also see that SEDE is third in terms of average nesting level, after ATIS and GeoQuery.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 21, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 517, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Numerical computations and text manipulation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We note that in order to simulate the most realistic setting, an ideal Text-to-SQL dataset would include questions asked by users which are completely unaware of the schema, which are not SQL-savy, and that the person asking the question would be different than the person answering it. While this is not the case in SEDE, we believe its setting is still significantly more realistic that other datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Semantic parsing models are usually evaluated in two different forms: execution accuracy and logical forms accuracy. In this section, we show why using any of these metrics is difficult with complex queries such as those in SEDE, and propose a more loose metric for evaluation of models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Execution accuracy This metric is measured by executing both the predicted and gold query against a dataset, and considers the query to be correct if the two output results are the same (or similar enough). While this metric appears to be exactly what we want to optimize (yielding a query the outputs a correct output), it does not necessarily cope well with two challenges: spurious queries and under-specified questions. Spurious queries are incorrect queries (with respect to the given question) that happen to result in a correct answer, thus leading to a false-positive count. The problem of spuriousness can be addressed by executing the predicted query on modified versions of the dataset, as proposed in Zhong et al. (2020) . The second challenge, evaluating under-specification, is arguably harder to address, as mentioned in Subsection 3.2. For example, consider a question that asks for \"the top 1% active users\". This question does not specify which columns should be returned, how the rows should be ordered, and how does one measure \"being active\". As such, a query could be correct with respect to some interpretation, yet its execution result might be different than the execution result of the given gold query.", |
|
"cite_spans": [ |
|
{ |
|
"start": 713, |
|
"end": 732, |
|
"text": "Zhong et al. (2020)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
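
{

"text": "To illustrate how two reasonable interpretations can diverge, consider the following sketch (ours, not from the dataset): SELECT TOP 1 PERCENT Id, DisplayName FROM Users ORDER BY Reputation DESC versus SELECT TOP 1 PERCENT u.Id, u.DisplayName FROM Users u JOIN Posts p ON p.OwnerUserId = u.Id GROUP BY u.Id, u.DisplayName ORDER BY count(*) DESC; both are plausible readings of \"the top 1% active users\", yet they generally return different execution results, so denotation accuracy would mark one of them wrong regardless of which interpretation the gold query happens to follow.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "4"

},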
|
{ |
|
"text": "Logical form accuracy Instead of comparing execution results, another frequent approach is to simply perform a textual comparison between the predicted and gold queries. When comparing SQL queries, it is common to perform a more loose comparison that does not consider the order of appearances of different clauses (e.g. it shouldn't matter which WHERE expression is written first), as performed in Spider (Yu et al., 2018) . However, as discussed in Zhong et al. (2020) , even this looser metric leads to false-negative measures, since multiple queries can all be correct with respect to an utterance, but written in various different manners. Due to the richness of SQL queries in SEDE, its extended scope and the fact that queries are written by many different authors, in our case this problem deteriorates: queries can be written in a substantial number of ways. For example, a query that contains a WITH statement could yield exactly the same result without it, by including a nested FROM clause instead.", |
|
"cite_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 423, |
|
"text": "(Yu et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 470, |
|
"text": "Zhong et al. (2020)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this work, in order to alleviate the aforementioned issues with exact-match logical form evaluation, we loosen it so that models can get partial scores if at least some part of their predicted expressions are found in the gold query. We do this by parsing both the predicted query and the gold query, comparing different parts of the two parsed trees and aggregating the scores into a single met- ric, as defined next. We term this metric Partial Component Match F1 (PCM-F1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sub-tree elements matching", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Our proposed metric is based on the \"Component Matching\" metric which is used in Spider's evaluation , except that we use a parser that supports a large variety of queries (Spider's parser only supports specific types of queries), define how to compute the metric in a general way (not specific to any SQL-specific clause) and aggregate (average) the F1 scores into a single value, as defined next.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sub-tree elements matching", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We first use an open-source SQL parser, JSql-Parser, 5 to parse a given SQL query q into a tree, and extract a set of elements for each of its subtrees, considering a sub-tree only if all of its leaves are terminal values in the query (similar to extracting constituents from a parse tree). For example, as can be seen in Figure 1 , the predicted query q 1 has 7 relevant sub-trees (marked in rectangles). The sub-tree which represents the expression b=1 contains four elements: b,=,1 and b=1. We then split these sets into different categories, based on the SQL query part that the root of the original sub-tree belonged to, for each of the following categories: C = {SELECT, TOP, FROM, WHERE, GROUPBY, HAVING, ORDERBY}. We denote all sets of elements for a query q in a category c \u2208 C as s c (q). For example, as can be seen in Figure 1 , the clause s SELECT (q 1 ) yields 3 sub-trees. Given a predicted query q p and a gold query q g , we compute the average F1 metric of all aligned pairs of sets s c (q p ) and s c (q g ):", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 330, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 830, |
|
"end": 838, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sub-tree elements matching", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "PCM-F1(q p , q g ) = 1 | C | c\u2208C F 1 (s c (q p ), s c (q g )) Model Spider-Dev PCM-F1 PCM-EM PCM-F1-NOVALUES PCM-EM-NOVALUES EM", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sub-tree elements matching", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "RAT (Wang et al., 2020b) 88.1 37.3 91.3 69.0 69.7 \u2020 RAT+GAP (Shi et al., 2020) where F1 score is the harmonic mean of the precision and recall of the predicted sub-trees s c (q p ) with respect to the gold sub-trees s c (q g ). If for some category c, we get that s c (q p ) is an empty set but s c (q g ) is not, or vice-versa, we set F 1 = 0.0 for that category. Consider Figure 1 for an example. s SELECT (q 1 ) has 3 sub-trees while the gold category s SELECT (q 2 ) has 4 sub-trees. The predicted SELECT clause has 2 wrong sub-trees (a and a,b) leading to a precision p = 1 3 , and 2 missing elements leading to a recall r = 1 4 . Similarly, the WHERE clause gets a precision of p = 1 2 and a recall of r = 1 2 . Thus, we get F 1 = 0.285 for SELECT and F 1 = 0.5 for WHERE, leading to a final score PCM-F1 = 0.392.", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 24, |
|
"text": "(Wang et al., 2020b)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 60, |
|
"end": 78, |
|
"text": "(Shi et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 382, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sub-tree elements matching", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Parsing Queries JSqlParser could only parse 93.2% of the validation SQL queries in SEDE, and 92.5% of the test queries. For that reason, for evaluation we only use the subset of queries which we can parse and evaluate 6 . During evaluation, if the predicted query was not parsed, it receives a score of 0. Note that this does not affect training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We note that our metric does not address at all the issue of false negatives -in fact, since it's a looser metric than the Exact Match metric, it is actually more prone to produce false negative outcomes. For SEDE, this issue could be mitigated by improving the similarity function that compares two queries, or by adapting the execution accuracy method in a way that will be less sensitive to instances of under-specification. We leave this challenge for future work. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "False negatives", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we describe our experimental setup, test how strong baselines perform on SEDE, and analyze their errors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Most models in the Spider leaderboard 7 use a grammar-based decoder designed for Spider, and as a result, they cannot be used as-is on SEDE, which uses a larger grammar. Thus, following , we use a general-purpose pretrained sequence-to-sequence model, T5 (Raffel et al., 2020) , which was shown to be competitive with Spider's state-of-the-art models. Since all queries in SEDE come from a single schema which is seen during training time, it is not clear if allowing the model to access the schema during encoding and decoding is helpful. We thus experiment with two versions. In the first one, T5, the input is simply the utterance\u016b. In the second, T5 with schema, the input is the utterance\u016b followed by a separator token, and then the serialized schema. We follow Suhr et al. (2020) and serialize the schema by listing all tables in the schema and all the columns for each table, with a separator token between each column and table. Naturally, we did not evaluate T5 (without schema) on Spider since encoding the schema is crucial in a zero-shot Table 6 : Error analysis of gold queries vs. predicted queries for some selected dataset characteristics mentioned in 3. For brevity, in some of the examples we show only relevant parts of the query. setup. We perform textual pre-processing to the queries in SEDE before training (i.e. remove non UTF-8 characters and SQL comments, normalize spaces and new lines, normalize apostrophes, remove comments, etc.). We show results for experiments considering the titles alone, and ignore their given description, which are given in 14.6% of the examples. We have found that if we concatenate the description to the title, we get slightly worse results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 276, |
|
"text": "(Raffel et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 768, |
|
"end": 786, |
|
"text": "Suhr et al. (2020)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1051, |
|
"end": 1058, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimantal Setup", |
|
"sec_num": "5.1" |
|
}, |
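
{

"text": "For illustration, a schema-serialized input in the T5 with schema setup has the following general shape (shown schematically; the exact separator tokens, casing and column subset here are illustrative rather than the precise strings used): questions which attract bad answers <sep> posts : id | parentid | posttypeid | score | answercount <sep> votes : id | postid | votetypeid <sep> users : id | reputation | location <sep> ...",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "5.1"

},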
|
{ |
|
"text": "We use the SentencePiece (Kudo and Richardson, 2018 ) tokenizer, with its default vocabulary, for all models. We fine-tune the model to minimize the token-level cross-entropy loss against the gold SQL query for 60 epochs with the AdamW (Loshchilov and Hutter, 2019) optimizer and a learning rate of 5e \u22125 . We choose the best model based on the performance on the validation set for each dataset, using Exact-Match (EM) for Spider and PCM-F1 for SEDE. For inference, we use beam-search (of size 6) and choose the highestprobability generated SQL query. We show results for both T5-Base and T5-Large.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 51, |
|
"text": "(Kudo and Richardson, 2018", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 265, |
|
"text": "(Loshchilov and Hutter, 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimantal Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For each experiment we measure PCM-F1 together with a modified version of it, PCM-EM (PCM exact match), that returns an accuracy of 1 for a given prediction if and only if the PCM-F1 value for that prediction is 1. For Spider, we use the officially provided script to measure the EM metric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimantal Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We show experiments results for SEDE in Table 5 and for Spider in Table 4 . The results indicate that the performance gap between SEDE and Spider is large: while T5-Large reaches a score of 63.2 EM on Spider's validation set, not very far from the state-of-the-art (a difference of 8.6 points), and a PCM-F1 of 86.3, when trained on SEDE, it only receives 48.2 and 50.6 PCM-F1 on the validation and test set of SEDE, respectively. This supports our main claim, that single-schema datasets could still impose a substantial challenge when tested in a realistic setup. We also notice in Table 4 that large improvements in EM do not necessarily imply a large increase in PCM-F1, since PCM-F1 numbers are already high for Spider in any of the tested models, implying that the model is generating SQL queries that are close to the exact gold SQL, only different by a small change (e.g. value or column name).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 47, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 66, |
|
"end": 73, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 584, |
|
"end": 591, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Comparing experiments with and without encoding the schema shows that encoding the schema does not significantly improve results in this singledomain setup. We also observe that PCM-EM is close to 0 in all experiments, supporting our motivation to create a loosened evaluation metric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In order to validate the correctness of our proposed evaluation metric, we compare PCM-EM with the more established EM metric of Spider. There are two differences in the way EM is calculated compared to PCM-EM: (1) EM anonymizes all values in the queries and (2) EM ignores the ON expressions in the JOIN clauses. For those reasons, we define PCM-F1-NOVALUES and PCM-EM-NOVALUES, modified versions of PCM-F1 and PCM-EM, respectively, such that all values in the SQL are anonymized and the ON expressions are ignored. Table 4 shows that EM and PCM-EM-NOVALUES are only different by up to 0.7 points for all models, showing that PCM-F1 is well calibrated with Spider's EM.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 517, |
|
"end": 524, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PCM-F1 Validation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Next, we analyze errors and successful outputs of the model. Table 6 shows examples of gold vs. predicted queries by our model, with respect to some of the introduced challenges mentioned in 3.2.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 68, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We can see from the first example that the model is often wrong whenever the question is not specified well: In this example, this happens in the SELECT, WHERE and ORDER fields. In the SELECT clause, the model predicts extra columns in comparison to the gold query, most likely as it has learned to do so for similar questions. In addition, since the desired order of the results are not mentioned in the utterance, it leads to a different predicted ORDER BY clause. A hidden assumption the author had added to the query is taking into account only open questions (i.e. questions with no close date: ClosedDate is null). The model, which could not deduce this assumption from the utterance alone, predicts a wrong filter expression CreationDate > '2018-01-01'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "The second example shows how the model correctly uses the DATEDIFF function to manipulate dates, although it predicted a wrong computation of the percentage (i.e. without the SUM function).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "The last example shows how the model generates a SQL query with parameters, for the number of required users (with a predicted default value of 100) and for the tag name. In this case, the predicted query is possibly better than the gold one as it uses a reusable parameter instead of a fixed one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "In this work, we take a significant step towards improving and evaluating Text-to-SQL models in a real world setting, by releasing SEDE, a dataset comprised of real-world complex and diverse SQL queries with their utterances, naturally written by real users. We show that there's a large gap between the performance of strong Text-to-SQL baselines on SEDE compared to the commonly studied dataset Spider, and hope that the release of this challenging dataset will encourage research on improving generalization for real-world SQL prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Publicly available at https://data. stackexchange.com/ 3 https://tinyurl.com/sedeschema", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We publish both the original and the fixed examples", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/JSQLParser/ JSqlParser", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While we did not use the rest of the validation queries, we have released them in the dataset for future use, assuming at least some of them are valid queries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://yale-lily.github.io/spider", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Acknowledgments We thank Kevin Montrose and the rest of the Stack Exchange team for providing the raw query log.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Structure-grounded pretraining for text-to-sql", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Hassan Awadallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Meek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Polozov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huan", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang Deng, Ahmed Hassan Awadallah, Chris Meek, Alex Polozov, Huan Sun, and Matthew Richardson. 2020. Structure-grounded pretraining for text-to-sql. Technical Report 2010.12773, Arxiv.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Speak to your parser: Interactive text-to-sql with natural language feedback", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Elgohary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saghar", |
|
"middle": [], |
|
"last": "Hosseini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [ |
|
"Hassan" |
|
], |
|
"last": "Awadallah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Elgohary, Saghar Hosseini, and Ahmed Hassan Awadallah. 2020. Speak to your parser: Interactive text-to-sql with natural language feedback.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "NL-EDIT: Correcting semantic parse errors through natural language interaction", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Elgohary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Meek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Fourney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gonzalo", |
|
"middle": [], |
|
"last": "Ramos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [ |
|
"Hassan" |
|
], |
|
"last": "Awadallah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5599--5610", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Elgohary, Christopher Meek, Matthew Richardson, Adam Fourney, Gonzalo Ramos, and Ahmed Hassan Awadallah. 2021. NL-EDIT: Correcting semantic parse errors through natural language interaction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5599-5610, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Improving text-to-SQL evaluation methodology", |
|
"authors": [ |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Finegan-Dollak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Kummerfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Ramanathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sesh", |
|
"middle": [], |
|
"last": "Sadasivam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "351--360", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1033" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-SQL evaluation methodology. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 351-360, Melbourne, Australia. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The ATIS spoken language systems pilot corpus", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Hemphill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Godfrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Doddington", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language sys- tems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "TaPas: Weakly supervised table parsing via pre-training", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Herzig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krzysztof", |
|
"middle": [], |
|
"last": "Nowak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Piccinno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Eisenschlos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4320--4333", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.398" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Herzig, Pawel Krzysztof Nowak, Thomas M\u00fcller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 4320-4333, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning a neural semantic parser from user feedback", |
|
"authors": [ |
|
{ |
|
"first": "Srinivasan", |
|
"middle": [], |
|
"last": "Iyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Konstas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alvin", |
|
"middle": [], |
|
"last": "Cheung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jayant", |
|
"middle": [], |
|
"last": "Krishnamurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learn- ing a neural semantic parser from user feedback. CoRR, abs/1704.08760.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-2012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Constructing an interactive natural language interface for relational databases", |
|
"authors": [ |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Jagadish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proc. VLDB Endow", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "73--84", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.14778/2735461.2735468" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fei Li and H. V. Jagadish. 2014. Constructing an interactive natural language interface for relational databases. Proc. VLDB Endow., 8(1):73-84.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Decoupled weight decay regularization", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Loshchilov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Phrasetransformer: Self-attention using local context for semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Phuong", |
|
"middle": [ |
|
"Minh" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vu", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phuong Minh Nguyen, Vu Tran, and Minh Le Nguyen. 2021. Phrasetransformer: Self-attention using local context for semantic parsing.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Compositional semantic parsing on semi-structured tables", |
|
"authors": [ |
|
{ |
|
"first": "Panupong", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1470--1480", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1142" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Panupong Pasupat and Percy Liang. 2015. Compo- sitional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1470-1480, Beijing, China. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Exploring the limits of transfer learning with a unified text-to", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Raffel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharan", |
|
"middle": [], |
|
"last": "Narang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Matena", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanqi", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Compositional generalization and natural language variation: Can a semantic parsing approach handle both?", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Shaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Panupong", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. 2020. Compositional general- ization and natural language variation: Can a seman- tic parsing approach handle both?", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Cicero Nogueira dos Santos, and Bing Xiang. 2020. Learning contextual representations for semantic parsing with generation", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiguo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henghui", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"Hanbo" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cicero", |
|
"middle": [ |
|
"Nogueira" |
|
], |
|
"last": "Dos Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. 2020. Learning con- textual representations for semantic parsing with generation-augmented pre-training.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Exploring unexplored generalization challenges for cross-database semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Alane", |
|
"middle": [], |
|
"last": "Suhr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Shaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8372--8388", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.742" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alane Suhr, Ming-Wei Chang, Peter Shaw, and Ken- ton Lee. 2020. Exploring unexplored generalization challenges for cross-database semantic parsing. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8372- 8388, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automated construction of database interfaces: Integrating statistical and relational learning for semantic parsing. EMNLP '00", |
|
"authors": [ |
|
{ |
|
"first": "Lappoon", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--141", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1117794.1117811" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lappoon R. Tang and Raymond J. Mooney. 2000. Au- tomated construction of database interfaces: Inte- grating statistical and relational learning for seman- tic parsing. EMNLP '00, page 133-141, USA. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers", |
|
"authors": [ |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oleksandr", |
|
"middle": [], |
|
"last": "Polozov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7567--7578", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.677" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020a. RAT- SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 7567-7578, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Ratsql: Relation-aware schema encoding and linking for text-to-sql parsers", |
|
"authors": [ |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oleksandr", |
|
"middle": [], |
|
"last": "Polozov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020b. Rat- sql: Relation-aware schema encoding and linking for text-to-sql parsers.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Building a semantic parser overnight", |
|
"authors": [ |
|
{ |
|
"first": "Yushi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1332--1342", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1129" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 1332-1342, Beijing, China. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Sqlizer: Query synthesis from natural language", |
|
"authors": [ |
|
{ |
|
"first": "Navid", |
|
"middle": [], |
|
"last": "Yaghmazadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuepeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isil", |
|
"middle": [], |
|
"last": "Dillig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Dillig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proc. ACM Program. Lang", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3133887" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. Sqlizer: Query synthesis from natural language. Proc. ACM Program. Lang., 1(OOPSLA).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study", |
|
"authors": [ |
|
{ |
|
"first": "Ziyu", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huan", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5447--5458", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1547" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019. Model-based interactive semantic parsing: A uni- fied framework and a text-to-SQL case study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5447- 5458, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Dragomir Radev, richard socher, and Caiming Xiong. 2021. GraPPa: Grammar-augmented pre-training for table semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chien-Sheng", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xi", |
|
"middle": [], |
|
"last": "Victoria Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Chern Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinyi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, bailin wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev, richard socher, and Caiming Xiong. 2021. GraPPa: Grammar-augmented pre-training for table semantic parsing. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-SQL task", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michihiro", |
|
"middle": [], |
|
"last": "Yasunaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongxu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zifan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qingning", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shanelle", |
|
"middle": [], |
|
"last": "Roman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zilin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3911--3921", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1425" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large- scale human-labeled dataset for complex and cross- domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-sql task", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michihiro", |
|
"middle": [], |
|
"last": "Yasunaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongxu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zifan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qingning", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shanelle", |
|
"middle": [], |
|
"last": "Roman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zilin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2019. Spider: A large- scale human-labeled dataset for complex and cross- domain semantic parsing and text-to-sql task.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Learning to parse database queries using inductive logic programming", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Zelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1050--1055", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In Proceedings of the Thirteenth Na- tional Conference on Artificial Intelligence -Volume 2, AAAI'96, page 1050-1055. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Type-driven incremental semantic parsing with polymorphism", |
|
"authors": [ |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Zhao and Liang Huang. 2014. Type-driven incre- mental semantic parsing with polymorphism.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Semantic evaluation for text-to-SQL with distilled test suites", |
|
"authors": [ |
|
{ |
|
"first": "Ruiqi", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.29" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruiqi Zhong, Tao Yu, and Dan Klein. 2020. Semantic evaluation for text-to-SQL with distilled test suites.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "396--411", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 396-411, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "An example for sub-tree matching." |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"content": "<table><tr><td>Category</td><td/><td>Dataset</td><td/><td/><td>Example test cases</td></tr><tr><td/><td colspan=\"3\">SEDE Spider ATIS</td><td>Title</td><td>SQL query</td></tr><tr><td>Under specification and Hid-den assumptions</td><td>87</td><td>14</td><td>15</td><td>User List: Highest downvotes per votes day ratio with minimum down-</td><td>WHERE id <> -1</td></tr><tr><td>Parameters</td><td>40</td><td>0</td><td>0</td><td>Rollbacks by a certain user</td><td>WHERE UserId = @UserId</td></tr><tr><td>Window functions</td><td>8</td><td>0</td><td>0</td><td/><td/></tr><tr><td>Dates manipulation</td><td>15</td><td>0</td><td>0</td><td>Quickest new contributor answers to new contributor questions</td><td>DATEDIFF(s, Q.CreationDate, A.CreationDate)</td></tr><tr><td>Numerical and text manipulation computations</td><td>35</td><td>0</td><td>0</td><td colspan=\"2\">Average Number of Views per Tag sum(p.ViewCount)/count( * )</td></tr><tr><td>DECLARE/WITH</td><td>11</td><td>0</td><td>0</td><td>Rollbacks by a certain user</td><td>DECLARE @UserId AS int = ##UserId:int##</td></tr><tr><td>CASE</td><td>10</td><td>0</td><td>0</td><td>Questions and answers per year</td><td>CASE WHEN Score < 0 THEN 1 ELSE 0 END</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "this sub-section, we quantify and analyze the introduced challenges in SEDE, compared to other commonly used semantic parsing datasets. First, we manually analyze a sample of 100 ex-List of users in the Philippines. DENSE_RANK() OVER (ORDER BY Reputation DESC)" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Dataset characteristics comparison of randomly selected 100 samples among SEDE and other popular Text-to-SQL datasets." |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Comparison of different semantic parsing datasets (for Spider, analysis is performed on training and validation sets only). \u2020 denotes that numbers are reported fromFinegan-Dollak et al. (2018). Average Unique Queries / template denotes the number of different SQL queries per template, thus lower means more diversity in the dataset. Datasets above dashed line are cross-domain, and below it are single-domain." |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Results on Spider with various metrics. While we do not focus on Spider, we show our results for comparison of the model and evaluation metric with a known benchmark. \u2020 denotes reported numbers from Spider's official leaderboard. PCM-F1-NoValues and PCM-EM-NoValues are modified versions of PCM-F1 and PCM-EM, respectively, such that all values in the SQL are anonymized and the ON clause is ignored, in order to compare with Spider's official Exact-Match (EM) metric." |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Results on SEDE development and test sets." |
|
} |
|
} |
|
} |
|
} |