---
license: mit
---
|
## DocMath-Eval |
|
[**🏠 Homepage**](https://docmath-eval.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/yale-nlp/DocMath-Eval) | [**📖 arXiv**](https://arxiv.org/abs/2311.09805) | [**GitHub**](https://github.com/yale-nlp/DocMath-Eval)
|
|
|
This repository contains the data for the paper [DocMath-Eval: Evaluating Math Reasoning Capabilities of LLMs in Understanding Long and Specialized Documents](https://arxiv.org/abs/2311.09805).
|
**DocMath-Eval** is a comprehensive benchmark for numerical reasoning in specialized domains. It requires models to comprehend long and specialized documents and perform numerical reasoning to answer the given question.
|
|
|
<p align="center"> |
|
<img src="figures/overview.png" width="100%"> |
|
</p> |
|
|
|
## DocMath-Eval Dataset |
|
All data examples are divided into four subsets:
|
|
|
- **simpshort**, which is reannotated from [TAT-QA](https://aclanthology.org/2021.acl-long.254/) and [FinQA](https://aclanthology.org/2021.emnlp-main.300/), necessitates simple numerical reasoning over a short document with one table;

- **simplong**, which is reannotated from [MultiHiertt](https://aclanthology.org/2022.acl-long.454/), necessitates simple numerical reasoning over a long document with multiple tables;

- **compshort**, which is reannotated from [TAT-HQA](https://aclanthology.org/2022.acl-long.5/), necessitates complex numerical reasoning over a short document with one table;

- **complong**, which is annotated from scratch by our team, necessitates complex numerical reasoning over a long document with multiple tables.
|
|
|
For each subset, we provide the *testmini* and *test* splits; the reference solutions and ground-truth answers are hidden for the *test* split.
|
|
|
You can download the dataset with the following code:
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("yale-nlp/DocMath-Eval")

# print the first example in the complong testmini set
print(dataset["complong-testmini"][0])
```
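
Each subset/split pair is exposed as its own split, following the `<subset>-<split>` naming pattern used above. As a quick sanity check, here is a minimal sketch (the split names other than `complong-testmini` are assumed from that pattern) that loads every split and prints its size:

```python
from datasets import load_dataset

dataset = load_dataset("yale-nlp/DocMath-Eval")

# Split names assumed to follow the "<subset>-<split>" pattern,
# e.g. "complong-testmini" as in the example above.
for subset in ["simpshort", "simplong", "compshort", "complong"]:
    for split in ["testmini", "test"]:
        name = f"{subset}-{split}"
        if name in dataset:
            print(f"{name}: {len(dataset[name])} examples")
```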
|
|
|
The dataset is provided in JSON format and contains the following attributes:
|
|
|
```
{
    "question_id": [string] The question id
    "source": [string] The original source of the example (for the simpshort, simplong, and compshort sets)
    "original_question_id": [string] The original question id (for the simpshort, simplong, and compshort sets)
    "question": [string] The question text
    "paragraphs": [list] List of paragraphs and tables within the document
    "table_evidence": [list] List of indices in 'paragraphs' that are used as table evidence for the question
    "paragraph_evidence": [list] List of indices in 'paragraphs' that are used as text evidence for the question
    "python_solution": [string] Python-format and executable solution. This feature is hidden for the test set
    "ground_truth": [float] Executed result of 'python_solution'. This feature is hidden for the test set
}
```
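
To show how these fields fit together, below is a minimal sketch that reconstructs the gold evidence for a *testmini* example and re-executes the reference solution to reproduce `ground_truth`. It assumes each `python_solution` is a self-contained snippet defining a `solution()` function that returns the final numeric answer; treat that as an assumption about the format rather than a guarantee:

```python
from datasets import load_dataset

dataset = load_dataset("yale-nlp/DocMath-Eval")
example = dataset["complong-testmini"][0]

# Collect the gold evidence by indexing into 'paragraphs'.
evidence = [example["paragraphs"][i]
            for i in example["table_evidence"] + example["paragraph_evidence"]]
print(example["question"])
print(f"{len(evidence)} evidence paragraphs/tables")

# ASSUMPTION: 'python_solution' defines a solution() function that
# returns the final numeric answer when executed.
namespace = {}
exec(example["python_solution"], namespace)
predicted = namespace["solution"]()

# The executed result should match 'ground_truth' up to float tolerance.
assert abs(predicted - example["ground_truth"]) < 1e-4
```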
|
|
|
## Contact |
|
For any issues or questions, please email Yilun Zhao ([email protected]).
|
|
|
## Citation |
|
|
|
If you use the **DocMath-Eval** benchmark in your work, please cite the paper:
|
|
|
```
@misc{zhao2024docmatheval,
      title={DocMath-Eval: Evaluating Math Reasoning Capabilities of LLMs in Understanding Long and Specialized Documents},
      author={Yilun Zhao and Yitao Long and Hongjun Liu and Ryo Kamoi and Linyong Nan and Lyuhao Chen and Yixin Liu and Xiangru Tang and Rui Zhang and Arman Cohan},
      year={2024},
      eprint={2311.09805},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2311.09805},
}
```