---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
pretty_name: FanoutQA
size_categories:
- 1K<n<10K
---

# FanOutQA

![PyPI](https://img.shields.io/pypi/v/fanoutqa)

[Read the paper!](https://aclanthology.org/2024.acl-short.2/) | [Download the dataset!](https://github.com/zhudotexe/fanoutqa/tree/main/fanoutqa/data)

> [!NOTE]
> This HuggingFace repo is a mirror of our GitHub repo's data. We recommend using the `fanoutqa` Python package to interact with the dataset.

FanOutQA is a high-quality, multi-hop, multi-document benchmark for large language models using English Wikipedia as its
knowledge base. Compared to other question-answering benchmarks, FanOutQA requires reasoning over a greater number of
documents, with its main focus on the titular fan-out style of question. We present these questions
in three tasks -- closed-book, open-book, and evidence-provided -- which measure different abilities of LLM systems.

This repository contains utilities to download and work with the dataset in Python, along with implementations of the
evaluation metrics presented in our paper. Alternatively, you can download the dev and test sets in JSON format and
generate completions to submit to us for evaluation.

To view the leaderboards and more documentation about how to use this dataset, check out our website
at <https://fanoutqa.com>!

## Requirements and Installation

The `fanoutqa` package requires Python 3.8+.

To work with just the data, use `pip install fanoutqa`.
To also include a baseline retriever and the evaluation metrics, use `pip install "fanoutqa[all]"` and read the following section.

### Optional

To include a baseline BM25-based retriever, use `pip install "fanoutqa[retrieval]"`.

To run evaluations on the dev set, you will need a couple of extra steps:

```shell
pip install "fanoutqa[eval]"
python -m spacy download en_core_web_sm
pip install "bleurt @ git+https://github.com/google-research/bleurt.git@master"
wget https://storage.googleapis.com/bleurt-oss-21/BLEURT-20.zip
unzip BLEURT-20.zip
rm BLEURT-20.zip
```

## Quickstart

1. Use `fanoutqa.load_dev()` or `fanoutqa.load_test()` to load the dataset.
2. Run your generations.
    1. Use `fanoutqa.wiki_search(title)` and `fanoutqa.wiki_content(evidence)` to retrieve the contents of
       Wikipedia pages for the Open Book and Evidence Provided settings.
3. Evaluate your generations with `fanoutqa.eval.evaluate(dev_questions, answers)` (see below for the schema).

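For example, a minimal closed-book run over the dev set might look like the sketch below. `generate_answer` is a
placeholder for your own model or system (it is not part of this package), and the answer schema is described in the
Evaluation section.

```python
import fanoutqa
import fanoutqa.eval  # requires the eval extras and FANOUTQA_OPENAI_API_KEY (see Evaluation)


def generate_answer(question: str) -> str:
    # Placeholder: call your own model/system here.
    return "..."


dev_questions = fanoutqa.load_dev()

# Record generations as {"id", "answer"} dicts (see Evaluation below for the schema).
answers = [{"id": q.id, "answer": generate_answer(q.question)} for q in dev_questions]

scores = fanoutqa.eval.evaluate(dev_questions, answers, llm_cache_key="my-model")
```
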
## Data Format

To load the dev or test questions, simply use `fanoutqa.load_dev()` or `fanoutqa.load_test()`. This will return a list
of `DevQuestion` or `TestQuestion` objects, as documented below.

### Common Models

```python
Primitive = bool | int | float | str


class Evidence:
    pageid: int  # Wikipedia page ID
    revid: int  # Wikipedia revision ID of page as of dataset epoch
    title: str  # Title of page
    url: str  # Link to page
```

### Dev Set

The development set is a JSON file containing a list of DevQuestion objects:

```python
class DevQuestion:
    id: str
    question: str  # the top-level question to answer
    decomposition: list[DevSubquestion]  # human-written decomposition of the question
    answer: dict[str, Primitive] | list[Primitive] | Primitive
    necessary_evidence: list[Evidence]
    categories: list[str]


class DevSubquestion:
    id: str
    question: str
    decomposition: list[DevSubquestion]
    answer: dict[str, Primitive] | list[Primitive] | Primitive  # the answer to this subquestion
    depends_on: list[str]  # the IDs of subquestions that this subquestion requires answering first
    evidence: Evidence | None  # if this is None, the question will have a decomposition
```

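As an illustration of how these models nest, the sketch below (assuming only the fields documented above) walks a dev
question's human-written decomposition tree:

```python
import fanoutqa


def print_decomposition(subq, depth=1):
    # Print this subquestion and its answer, then recurse into its own decomposition.
    print("  " * depth + f"{subq.question} -> {subq.answer}")
    for child in subq.decomposition:
        print_decomposition(child, depth + 1)


dev_questions = fanoutqa.load_dev()
top = dev_questions[0]
print(top.question)
for subq in top.decomposition:
    print_decomposition(subq)
```
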
### Test Set

The test set uses a slightly different format, as the answers are not provided. We include links to all the evidence
used in the human-written decompositions for our Evidence Provided task.

```python
class TestQuestion:
    id: str
    question: str
    necessary_evidence: list[Evidence]
    categories: list[str]
```

## Wikipedia Retrieval

To retrieve the contents of Wikipedia pages used as evidence, this package queries Wikipedia's Revisions API. There
are two main functions to interface with Wikipedia:

- `wiki_search(query)` returns a list of Evidence (Wikipedia pages that best match the query).
- `wiki_content(evidence)` takes an Evidence and returns its content (as of the dataset epoch) as Markdown.

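For example, an open-book system might gather context like this (a minimal sketch; how you rank, chunk, or truncate
the returned Markdown is up to your system):

```python
import fanoutqa

dev_questions = fanoutqa.load_dev()
q = dev_questions[0]

# Open Book: search Wikipedia for pages related to the question...
pages = fanoutqa.wiki_search(q.question)

# ...or, Evidence Provided: use the gold evidence attached to the question.
# pages = q.necessary_evidence

# Fetch each page's content (as of the dataset epoch) as Markdown.
context = [fanoutqa.wiki_content(page) for page in pages[:3]]
```
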
To save time waiting on requests and to reduce computation (both locally and on Wikipedia's end), this package
aggressively caches retrieved Wikipedia pages. By default, this cache is located in `~/.cache/fanoutqa/wikicache`.
We provide a prebuilt cache of many pages (~9GB) that you can use to prepopulate it with the following commands:

```shell
mkdir -p ~/.cache/fanoutqa
wget -O ~/.cache/fanoutqa/wikicache.tar.gz https://datasets.mechanus.zhu.codes/fanoutqa/wikicache.tar.gz
tar -xzf ~/.cache/fanoutqa/wikicache.tar.gz -C ~/.cache/fanoutqa/
```

## Evaluation

To evaluate a model's generations, first ensure that you have installed all the evaluation dependencies (see above).

To use the GPT-as-judge metric, you will need to provide your OpenAI API key. We intentionally do not read
the `OPENAI_API_KEY` environment variable by default to prevent accidentally spending money; you must set the
`FANOUTQA_OPENAI_API_KEY` environment variable instead. You can use `export FANOUTQA_OPENAI_API_KEY=$OPENAI_API_KEY` to
quickly copy it over.

You should record your model/system's outputs as a list of dicts with the following schema:

```json
{
  "id": "The ID of the question (see test set schema) this is a generation for.",
  "answer": "The model's generation."
}
```

Finally, to evaluate your generations on the dev set,
call `fanoutqa.eval.evaluate(dev_questions, answers, llm_cache_key="your-model-key")`. This will run all of the metrics
and return an `EvaluationScore` object, which has attributes matching the following structure:

```json
{
  "acc": {
    "loose": 0,
    "strict": 0
  },
  "rouge": {
    "rouge1": {
      "precision": 0,
      "recall": 0,
      "fscore": 0
    },
    "rouge2": {
      "precision": 0,
      "recall": 0,
      "fscore": 0
    },
    "rougeL": {
      "precision": 0,
      "recall": 0,
      "fscore": 0
    }
  },
  "bleurt": 0,
  "gpt": 0
}
```

(To access the scores in this dictionary form, use `dataclasses.asdict()`.)

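For instance, the sketch below reads individual metrics as attributes and converts the whole result to a plain dict
(assuming the attribute nesting mirrors the structure above, and with a placeholder answers list standing in for your
real generations):

```python
import dataclasses

import fanoutqa
import fanoutqa.eval

dev_questions = fanoutqa.load_dev()
# Replace with your real generations; see the answer schema above.
answers = [{"id": q.id, "answer": "..."} for q in dev_questions]

scores = fanoutqa.eval.evaluate(dev_questions, answers, llm_cache_key="my-model")

# Individual metrics are attributes mirroring the structure above...
print(scores.acc.loose, scores.rouge.rougeL.fscore, scores.bleurt, scores.gpt)

# ...and dataclasses.asdict() gives the plain-dict form.
results = dataclasses.asdict(scores)
```
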
### Test Set Evaluation

To evaluate your model on the hidden test set, first generate answers for each question in the test set.
Your generations should be in the form of a JSONL file, with each line being a JSON object with the following schema
for each test question:

```json
{
  "id": "The ID of the question (see test set schema) this is a generation for.",
  "answer": "The model's generation."
}
```

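For example, you could produce this JSONL file from the test set like so (a minimal sketch; `generate_answer` again
stands in for your own system):

```python
import json

import fanoutqa


def generate_answer(question: str) -> str:
    # Placeholder: call your own model/system here.
    return "..."


test_questions = fanoutqa.load_test()

# Write one {"id", "answer"} JSON object per line.
with open("YOUR-SYSTEM-NAME.jsonl", "w") as f:
    for q in test_questions:
        f.write(json.dumps({"id": q.id, "answer": generate_answer(q.question)}) + "\n")
```
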
You will also need to write a metadata file for your model. Your metadata file should use this template:

```json
{
  "name": "The name of your model",
  "authors": "The list of authors, in natural language (e.g. `Andrew Zhu, Alyssa Hwang, Liam Dugan, and Chris Callison-Burch`)",
  "url": "A link to your model's website, if applicable (null otherwise)",
  "citation": "The list of authors and year, in citation format (e.g. `Zhu et al., 2024`)",
  "type": "FOUNDATION | FINETUNE | PROMPT | OTHER",
  "context": "The context length of the model your system uses (as an int)",
  "is_trained_for_function_calling": "Whether your model was trained for function calling specifically (true/false)",
  "details": "Additional model details (e.g. API model revision or Hugging Face model ID) - optional",
  "closedbook_generations": "YOUR-SYSTEM-NAME.jsonl",
  "openbook_generations": "YOUR-SYSTEM-NAME.jsonl",
  "evidenceprovided_generations": "YOUR-SYSTEM-NAME.jsonl"
}
```

Then, fork this repository. Add your generation files to
`leaderboard-submissions/[SETTING]-generations/YOUR-SYSTEM-NAME.jsonl`
and your metadata file to `leaderboard-submissions/metadata/YOUR-SYSTEM-NAME.json`, then make a pull request.

If you do not want to release the generations of your model, please email these files to
[[email protected]](mailto:[email protected]) instead and we will add your model to the leaderboards without
pushing the generations.

Finally, our GitHub bot will automatically run metrics on the submitted generations and commit a metrics file to
`leaderboard-submissions/results/YOUR-SYSTEM-NAME.jsonl`. If all looks well, a maintainer will merge the PR and your
model will appear on the leaderboards!

## Additional Resources

Although this package queries live Wikipedia and uses the Revisions API to get page content as of the dataset epoch,
we also provide a snapshot of English Wikipedia as of Nov 20, 2023. You can download this
snapshot [here](https://datasets.mechanus.zhu.codes/fanoutqa/enwiki-20231120-pages-articles-multistream.xml.bz2) (23G)
and its
index [here](https://datasets.mechanus.zhu.codes/fanoutqa/enwiki-20231120-pages-articles-multistream-index.txt.bz2).

## Acknowledgements

We would like to thank the members of the lab of Chris Callison-Burch for detailed feedback on the contents of this
paper and the members of the Northern Lights Province Discord for their participation in our human evaluation. In
particular, we would like to thank Bryan Li for his thoughtful suggestions with regard to our human evaluation and
other parts of our paper.

This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced
Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200005. The views and conclusions
contained herein are those of the authors and should not be interpreted as necessarily representing the official
policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to
reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.