More importantly, `FAMMA` provides a "live" benchmark for evaluating the financial analysis capabilities of LLMs. The benchmark continuously collects new questions from real-world financial professionals, ensuring up-to-date and contamination-free evaluation.

The leaderboard is regularly updated and can be accessed at https://famma-bench.github.io/famma/.

The project code is available at https://github.com/famma-bench/bench-script.

- [2024/06] Initial public release of the `FAMMA` benchmark (based on the `release_v2406` dataset), along with our paper: [FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering](https://arxiv.org/abs/2410.04526).

## Live Benchmarking Concept

In addition to the baseline dataset (`release_basic`, which contains 1,935 questions), `FAMMA` provides a "live" benchmark for evaluating the financial analysis capabilities of LLMs. The benchmark continuously collects new questions from real-world financial professionals, ensuring up-to-date and contamination-free evaluation.

The "live" nature of FAMMA means:

1. **Expert-Sourced Questions**: New questions are continuously proposed by financial experts, ensuring they have never been made public before and reflect real-world financial analysis scenarios. See [contributors](https://github.com/famma-bench/bench-script/blob/main/contributors.md).
2. **Contamination Prevention**: Questions in the live set (currently `release_livepro`) have non-public answers and explanations.
3. **Time-Based Evaluation**: Models can be evaluated on questions from specific time periods.
4. **Domain Coverage**: Questions span different financial topics and complexity levels, curated by domain experts.
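Point 3 above can be sketched as a simple filter on the release tag that closes each `question_id`. The records below are hypothetical and only illustrate the id convention described in the Dataset Structure section:

```python
# Hypothetical records; only the question_id field (which ends with the
# release tag) is needed for release-based filtering.
questions = [
    {"question_id": "english_1_1_release_basic"},
    {"question_id": "english_2_1_release_livepro"},
    {"question_id": "french_1_1_release_livepro"},
]

def from_release(records, release):
    """Keep only questions whose question_id ends with the given release tag."""
    return [r for r in records if r["question_id"].endswith(release)]

live_set = from_release(questions, "release_livepro")
print(len(live_set))  # → 2
```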

## Dataset Versions

FAMMA is continuously updated with new questions. We provide different versions of the dataset:

* `release_basic`: the release containing 1,935 questions collected from online sources. In addition to the questions, both answers and explanations are provided.
* `release_livepro`: the release containing 103 questions created by invited experts. Only the questions are provided.

## Dataset Structure

- idx: a unique identifier for the index of the question in the dataset.
- question_id: a unique identifier for the question across the whole dataset, formatted as `{language}_{main_question_id}_{sub_question_id}_{release_version}`.
- context: relevant background information related to the question.
- question: the specific query being asked.
- options: the candidate answer choices for multiple-choice questions.
- image_1 - image_7: directories of images referenced in the context or question.
- image_type: type of the image, e.g., chart, table, screenshot.
- answers: a concise and accurate response. **(public on `release_basic`, non-public on the live set `release_livepro`)**
- explanation: a detailed justification for the answer. **(public on `release_basic`, non-public on the live set `release_livepro`)**
- topic_difficulty: a measure of the question's complexity based on the level of reasoning required.
- question_type: categorized as either multiple-choice or open-ended.
- subfield: the specific area of expertise to which the question belongs, categorized into eight subfields.
- language: the language in which the question text is written.
- main_question_id: a unique identifier for the question's context within the same language subset; questions with the same context share the same main_question_id.
- sub_question_id: a unique identifier for the question within its corresponding main question.
- is_arithmetic: whether the question is an arithmetic question that requires heavy calculation.
- ans_image_1 - ans_image_6: directories of images referenced in the answer or explanation. **(public on `release_basic`, non-public on the live set `release_livepro`)**
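As a hypothetical illustration of the `question_id` convention above (the separator layout is assumed from the pattern; the release tag itself contains an underscore, so the id is split from the left only):

```python
def parse_question_id(question_id: str) -> dict:
    """Split a FAMMA question_id into its four assumed components.

    Layout assumed from the README:
        {language}_{main_question_id}_{sub_question_id}_{release_version}
    The release version may itself contain underscores (e.g. "release_basic"),
    so we split at most three times from the left.
    """
    language, main_id, sub_id, release = question_id.split("_", 3)
    return {
        "language": language,
        "main_question_id": main_id,
        "sub_question_id": sub_id,
        "release_version": release,
    }

parsed = parse_question_id("english_3_2_release_basic")
```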

## Download

To download the dataset, run the following command:

```bash
# --split accepts "release_basic" or "release_livepro"; omit it to download the whole set.
python step_1_download_dataset.py \
    --hf_dir "weaverbirdllm/famma" \
    --split "release_basic" \
    --save_dir "./hf_data"
```

Options:

- `--save_dir`: Local directory to save the dataset (default: "./hf_data")

After downloading, the dataset will be saved in the specified local directory (by default `./hf_data`) in JSON format.
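A minimal sketch of reading the downloaded files back into memory, assuming each JSON file holds either a list of question records or a single record (the exact file layout produced by the script may differ):

```python
import json
from pathlib import Path

def load_records(save_dir: str) -> list:
    """Collect question records from every .json file under save_dir.

    Assumes each file contains either a list of records or one record;
    adjust if the download script's actual layout differs.
    """
    records = []
    for path in sorted(Path(save_dir).glob("**/*.json")):
        with path.open(encoding="utf-8") as fh:
            data = json.load(fh)
        records.extend(data if isinstance(data, list) else [data])
    return records
```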

## Citation

If you use FAMMA in your research, please cite our paper as follows: