  data_files:
  - split: test
    path: "L-CiteEval-Hardness/counting_stars.json"
---

# L-CITEEVAL: DO LONG-CONTEXT MODELS TRULY LEVERAGE CONTEXT FOR RESPONDING?
**Paper** [![arXiv](https://img.shields.io/badge/arXiv-2410.02115-b31b1b.svg?style=plastic)](https://arxiv.org/abs/2410.02115) **Github** <a href="https://github.com/ZetangForward/L-CITEEVAL"><img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="Github" width="40" height="40"></a> **Zhihu** [![Zhihu](https://img.shields.io/badge/知乎-0079FF.svg?style=plastic&logo=zhihu&logoColor=white)](https://zhuanlan.zhihu.com/p/817442176)
## Benchmark Quickview
*L-CiteEval* is a multi-task benchmark for long-context understanding with citations. It covers **5 task categories**: single-document question answering, multi-document question answering, summarization, dialogue understanding, and synthetic tasks, encompassing **11 different long-context tasks**. The context lengths for these tasks range from **8K to 48K**.
![](assets/dataset.png)
## Data Preparation
#### Load Data
```python
from datasets import load_dataset

datasets = ["narrativeqa", "natural_questions", "hotpotqa", "2wikimultihopqa", "gov_report", "multi_news", "qmsum", "locomo", "dialsim", "counting_stars", "niah"]

for dataset in datasets:
    # Load L-CiteEval
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Data_{dataset}")

    # Load L-CiteEval-Length
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Length_{dataset}")

    # Load L-CiteEval-Hardness
    data = load_dataset('Jonaszky123/L-CiteEval', f"L-CiteEval-Hardness_{dataset}")
```
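
Each configuration name combines a subset prefix (`L-CiteEval-Data`, `L-CiteEval-Length`, or `L-CiteEval-Hardness`) with a task name from the list above, and the YAML header declares a single `test` split (for example, for the `counting_stars` configuration of L-CiteEval-Hardness). An individual task can therefore also be loaded directly; a minimal sketch:

```python
from datasets import load_dataset

# Minimal sketch: load only the test split of a single task
# (counting_stars from the L-CiteEval-Hardness subset).
counting_stars = load_dataset(
    'Jonaszky123/L-CiteEval', "L-CiteEval-Hardness_counting_stars", split="test"
)
print(counting_stars)
```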
<!-- You can get the L-CiteEval data from [🤗 Hugging face](). Once downloaded, place the data in the dataset folder. -->
All data in L-CiteEval follows the format below:
```json
{
    "id": "The identifier for the data entry",
    "question": "The task question, such as for single-document QA. In summarization tasks, this may be omitted",
    "answer": "The correct or expected answer to the question, used for evaluating correctness",
    "docs": "Context divided into fixed-length chunks",
    "length": "The context length",
    "hardness": "The level of difficulty in L-CiteEval-Hardness, which can be easy, medium, or hard"
}
```
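
As a quick sanity check of this format, the sketch below (field names taken from the schema above; `narrativeqa` picked as an arbitrary task) prints a few fields of the first test example:

```python
from datasets import load_dataset

# Minimal sketch: inspect the first test example of one L-CiteEval task.
data = load_dataset('Jonaszky123/L-CiteEval', "L-CiteEval-Data_narrativeqa", split="test")
example = data[0]

print("id:", example["id"])
print("length:", example["length"])
print("question:", example["question"])
print("answer:", example["answer"])
print("docs (context chunks):", len(example["docs"]))
```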
You can find the evaluation code in our [GitHub repository](https://github.com/ZetangForward/L-CITEEVAL).
## Citation
If you find our work helpful, please cite our paper:
```bibtex
@misc{tang2024lciteeval,
    title={L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?},
    author={Zecheng Tang and Keyan Zhou and Juntao Li and Baibei Ji and Jianye Hou and Min Zhang},
    year={2024},
    eprint={2410.02115},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```