language:
- en
size_categories:
- 1K<n<10K
---

## Dataset Summary

**RetrievalQA** is a short-form open-domain question answering (QA) dataset consisting of 1,271 questions covering new world and long-tail knowledge. We ensure that the knowledge necessary to answer the questions is absent from LLMs, so an LLM must truthfully decide whether to retrieve in order to answer them correctly.

RetrievalQA enables us to evaluate the effectiveness of **adaptive retrieval-augmented generation (RAG)** approaches, an aspect predominantly overlooked in prior studies and recent RAG evaluation systems, which focus only on task performance, the relevance of the retrieved context, or the faithfulness of answers.

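To make the evaluation setting concrete, below is a minimal, purely illustrative sketch of the adaptive retrieval loop this dataset targets; the `should_retrieve` and `model_answer` helpers are hypothetical placeholders, not code from the paper or repository.

```python
# Illustrative sketch only: both helpers are hypothetical stand-ins for
# whatever LLM calls an evaluated adaptive-RAG system actually makes.

def should_retrieve(question: str) -> bool:
    # A real system would ask the LLM (or a learned policy) whether its
    # parametric knowledge suffices to answer this question.
    return True

def model_answer(question: str, evidence: str | None = None) -> str:
    # Placeholder for an LLM call; a real system would prepend `evidence`
    # to the prompt when it is provided.
    return ""

def answer_adaptively(question: str, context: list[dict]) -> str:
    # Each RetrievalQA instance ships retrieved evidence in its `context`
    # field (see Dataset Structure below), so both branches can be scored.
    if should_retrieve(question):
        evidence = " ".join(doc["text"] for doc in context)
        return model_answer(question, evidence=evidence)
    return model_answer(question, evidence=None)
```
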
## Dataset Sources

- **Repository:** https://github.com/hyintell/RetrievalQA
- **Paper:** https://arxiv.org/abs/2402.16457

## Dataset Structure

Here is an example of a data instance:

```json
{
    "data_source": "realtimeqa",
    "question_id": "realtimeqa_20231013_1",
    "question": "What percentage of couples are 'sleep divorced', according to new research?",
    "ground_truth": ["15%"],
    "context": [
        {
            "title": "Do We Sleep Longer When We Share a Bed?",
            "text": "1.4% of respondents have started a sleep divorce, or sleeping separately from their partner, and maintained it in the past year. Adults who have ..."
        }, ...
    ]
}
```

where:
- `data_source`: the source dataset the question comes from
- `question_id`: a unique identifier for the question
- `question`: the question text
- `ground_truth`: a list of acceptable answers
- `context`: a list of dictionaries of retrieved relevant evidence. Note that the `title` of a document may be empty.

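For quick experimentation, here is a minimal loading-and-scoring sketch; the Hugging Face dataset ID is an assumption based on the linked repository, and the containment-style match is a common short-form QA heuristic rather than the paper's official metric.

```python
from datasets import load_dataset

# Dataset ID assumed from the linked repository; check this card's actual path.
dataset = load_dataset("hyintell/RetrievalQA")
print(dataset)  # inspect the available splits and fields

def is_correct(prediction: str, ground_truth: list[str]) -> bool:
    # Count a prediction as correct if any gold answer string appears in it
    # after lowercasing (a simple heuristic, not the paper's official metric).
    pred = prediction.lower()
    return any(ans.lower() in pred for ans in ground_truth)

print(is_correct("Roughly 15% of couples, per the survey.", ["15%"]))  # True
```
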
## Citation

```bibtex
@misc{zhang2024retrievalqa,
    title={RetrievalQA: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering},
    author={Zihan Zhang and Meng Fang and Ling Chen},
    year={2024},
    eprint={2402.16457},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```