---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: table_names
    sequence: string
  - name: tables
    sequence: string
  - name: source
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: train
    num_bytes: 6532402
    num_examples: 384
  - name: validation
    num_bytes: 826593
    num_examples: 45
  - name: test
    num_bytes: 1057831
    num_examples: 86
  download_size: 711704
  dataset_size: 8416826
license: apache-2.0
task_categories:
- table-question-answering
language:
- en
size_categories:
- n<1K
tags:
- travel
---
# Dataset Card for "atis-tableQA"
# Usage
```python
import pandas as pd
from datasets import load_dataset
atis_tableQA = load_dataset("vaishali/atis-tableQA")
for sample in atis_tableQA["train"]:
    question = sample["question"]
    sql_query = sample["query"]
    # the answer table is serialized with pandas' "split" orientation
    answer = pd.read_json(sample["answer"], orient="split")
    # flattened input for a sequence-to-sequence model
    input_to_llm = sample["source"]
    target = sample["target"]
```
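Each entry of the `tables` field is also a JSON string, which can be decoded the same way as `answer`. A minimal, self-contained sketch of that round-trip, using a toy table (with hypothetical column names) instead of a live download:

```python
import pandas as pd

# Toy stand-in for one entry of sample["tables"]: a table serialized
# with pandas' "split" orientation (columns, index, and data stored separately).
table_json = pd.DataFrame(
    {"flight_id": [1, 2], "from_airport": ["BOS", "JFK"]}
).to_json(orient="split")

# Decoding mirrors how `answer` is read in the usage example above.
df = pd.read_json(table_json, orient="split")
print(df.shape)  # (2, 2)
```

In the dataset itself, `table_names[i]` gives the name of the table serialized in `tables[i]`, so the two sequences can be zipped together to rebuild the input database.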
# BibTeX entry and citation info
```
@inproceedings{pal-etal-2023-multitabqa,
title = "{M}ulti{T}ab{QA}: Generating Tabular Answers for Multi-Table Question Answering",
author = "Pal, Vaishali and
Yates, Andrew and
Kanoulas, Evangelos and
de Rijke, Maarten",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.348",
doi = "10.18653/v1/2023.acl-long.348",
pages = "6322--6334",
abstract = "Recent advances in tabular question answering (QA) with large language models are constrained in their coverage and only answer questions over a single table. However, real-world queries are complex in nature, often over multiple tables in a relational database or web page. Single table questions do not involve common table operations such as set operations, Cartesian products (joins), or nested queries. Furthermore, multi-table operations often result in a tabular output, which necessitates table generation capabilities of tabular QA models. To fill this gap, we propose a new task of answering questions over multiple tables. Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers. To enable effective training, we build a pre-training dataset comprising of 132,645 SQL queries and tabular answers. Further, we evaluate the generated tables by introducing table-specific metrics of varying strictness assessing various levels of granularity of the table structure. MultiTabQA outperforms state-of-the-art single table QA models adapted to a multi-table QA setting by finetuning on three datasets: Spider, Atis and GeoQuery.",
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)