---
language:
- en
- zh
- fr
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- question-answering
- multiple-choice
pretty_name: 'FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question
  Answering'
tags:
- finance
dataset_info:
  features:
  - name: idx
    dtype: int32
  - name: question_id
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: image_6
    dtype: image
  - name: image_7
    dtype: image
  - name: image_type
    dtype: string
  - name: answers
    dtype: string
  - name: explanation
    dtype: string
  - name: topic_difficulty
    dtype: string
  - name: question_type
    dtype: string
  - name: subfield
    dtype: string
  - name: language
    dtype: string
  - name: main_question_id
    dtype: string
  - name: sub_question_id
    dtype: string
  - name: is_arithmetic
    dtype: int32
  - name: ans_image_1
    dtype: image
  - name: ans_image_2
    dtype: image
  - name: ans_image_3
    dtype: image
  - name: ans_image_4
    dtype: image
  - name: ans_image_5
    dtype: image
  - name: ans_image_6
    dtype: image
  - name: release
    dtype: string
  splits:
  - name: release_livepro
    num_bytes: 3266580.0
    num_examples: 103
  - name: release_basic
    num_bytes: 113235537.37
    num_examples: 1945
  - name: release_basic_txt
    num_bytes: 1978313.375
    num_examples: 1945
  download_size: 94674468
  dataset_size: 118480430.745
configs:
- config_name: default
  data_files:
  - split: release_livepro
    path: data/release_livepro-*
  - split: release_basic
    path: data/release_basic-*
  - split: release_basic_txt
    path: data/release_basic_txt-*
---
## Introduction 



`FAMMA` is a multimodal financial question-answering benchmark. Its questions encompass three heterogeneous image types (tables, charts, and text & math screenshots) and span eight subfields in finance, comprehensively covering topics across the major asset classes. Every question is labeled with one of three difficulty levels (easy, medium, or hard), is available in three languages (English, Chinese, and French), and is one of two types: multiple-choice or open-ended.

More importantly, `FAMMA` provides a "live" benchmark for evaluating financial analysis capabilities of LLMs. The benchmark continuously collects new questions from real-world financial professionals, ensuring up-to-date and contamination-free evaluation. 

The leaderboard is regularly updated and can be accessed at https://famma-bench.github.io/famma/.  

The project code is available at https://github.com/famma-bench/bench-script.



## NEWS

🔥 **Latest Updates**:
- [2025/03]  Release of `release_basic_txt`, a purely textual dataset that utilizes OCR to extract multimodal information and convert it into textual context for each question in `release_basic`.
- [2025/03]  Added an `is_arithmetic` column to the dataset to indicate whether a question involves heavy computation.
- [2025/02]  Release of `release_livepro` dataset.
- [2025/01]  Release of `release_basic` dataset, now including answers and explanations with enhanced quality.
- [2024/06]  Initial public release of `FAMMA` benchmark (based on the `release_basic` dataset), along with our paper: [FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering](https://arxiv.org/abs/2410.04526).


## Live Benchmarking Concept

In addition to the baseline dataset (`release_basic`, which contains 1945 questions), `FAMMA` provides a "live" benchmark for evaluating the financial analysis capabilities of LLMs: it continuously collects new questions from real-world financial professionals, ensuring up-to-date and contamination-free evaluation.

The "live" nature of FAMMA means:
1. **Expert-Sourced Questions**: New questions are continuously proposed by financial experts, ensuring they have never been made public before and reflect real-world financial analysis scenarios. See [contributors](https://github.com/famma-bench/bench-script/blob/main/contributors.md).
2. **Contamination Prevention**: Questions in the live set (at the moment `release_livepro`) have non-public answers and explanations.
3. **Time-Based Evaluation**: Models can be evaluated on questions from specific time periods.
4. **Domain Coverage**: Questions span across different financial topics and complexity levels, curated by domain experts.

## Dataset Versions

FAMMA is continuously updated with new questions. We provide different versions of the dataset:

* `release_basic`: The release containing 1945 questions, collected from online sources. In addition to the questions, both answers and explanations are provided.
* `release_livepro`: The release containing 103 questions, created by invited experts. Only the questions are provided.
  

## Dataset Structure

- idx: a unique identifier for the index of the question in the dataset.
- question_id: a unique identifier for the question across the whole dataset: `{language}_{main_question_id}_{sub_question_id}_{release_version}`.
- context: relevant background information related to the question.
- question: the specific query being asked.
- options: the candidate answer choices for multiple-choice questions.
- image_1 - image_7: images referenced in the context or question.
- image_type: type of the image, e.g., chart, table, screenshot.
- answers: a concise and accurate response. **(public on `release_basic`, non-public on the live set `release_livepro`)**
- explanation: a detailed justification for the answer. **(public on `release_basic`, non-public on the live set `release_livepro`)**
- topic_difficulty: a measure of the question's complexity based on the level of reasoning required.
- question_type: categorized as either multiple-choice or open-ended.
- subfield: the specific area of expertise to which the question belongs, categorized into eight subfields.
- language: the language in which the question text is written.
- main_question_id: a unique identifier under the same language subset for the question within its context; questions with the same context share the same ID.
- sub_question_id: a unique identifier for the question within its corresponding main question.
- is_arithmetic: whether the question is an arithmetic question that needs heavy calculation.
- ans_image_1 - ans_image_6: images referenced in the explanation. **(public on `release_basic`, non-public on the live set `release_livepro`)**
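
With rows carrying these fields, common selections are simple dictionary filters. A minimal sketch below treats each row as a plain Python dict; the sample records and their field values are hypothetical, for illustration only:

```python
def filter_questions(records, language=None, difficulty=None):
    """Return records matching the given language and topic_difficulty."""
    out = []
    for r in records:
        if language is not None and r["language"] != language:
            continue
        if difficulty is not None and r["topic_difficulty"] != difficulty:
            continue
        out.append(r)
    return out

# Hypothetical sample rows mimicking the field layout described above.
sample = [
    {"question_id": "english_1_1_r1", "language": "english",
     "topic_difficulty": "hard", "question_type": "multiple-choice"},
    {"question_id": "french_2_1_r1", "language": "french",
     "topic_difficulty": "easy", "question_type": "open question"},
]

hard_english = filter_questions(sample, language="english", difficulty="hard")
```

The same pattern applies to any other field (e.g., `subfield` or `is_arithmetic`).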
 

## Download 

See the download script at https://github.com/famma-bench/bench-script/blob/main/step_1_download_dataset.py.


First, clone the repository and install the dependencies:
```bash
git clone https://github.com/famma-bench/bench-script.git
cd bench-script
pip install -r requirements.txt
```

To download the dataset, run the following command:


```bash
# --split can also be "release_livepro"; omit it to download the whole set
python step_1_download_dataset.py \
    --hf_dir "weaverbirdllm/famma" \
    --split "release_basic" \
    --save_dir "./hf_data"
```

Options:
- `--hf_dir`: HuggingFace repository name
- `--split`: Specific version to download (optional)
- `--save_dir`: Local directory to save the dataset (default: "./hf_data")


After downloading, the dataset will be saved in JSON format under the directory given by `--save_dir` (default: `./hf_data`).
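
Since questions sharing a context also share a `main_question_id` (within a language), a typical post-processing step is to regroup sub-questions under their main question. The sketch below shows this with the standard library; the commented-out file path and the in-memory sample records are assumptions, not guaranteed by the download script:

```python
import json
from collections import defaultdict

def group_by_main_question(records):
    """Group records sharing a context via (language, main_question_id)."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["language"], r["main_question_id"])].append(r)
    return groups

# Hypothetical path and layout; adjust to what step_1_download_dataset.py wrote:
# with open("./hf_data/release_basic.json") as f:
#     records = json.load(f)
records = [
    {"language": "english", "main_question_id": "1", "sub_question_id": "1"},
    {"language": "english", "main_question_id": "1", "sub_question_id": "2"},
    {"language": "chinese", "main_question_id": "1", "sub_question_id": "1"},
]
groups = group_by_main_question(records)
```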

## Citation 
If you use FAMMA in your research, please cite our paper as follows:

```bibtex
@article{xue2024famma,
  title={FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering},
  author={Xue, Siqiao and Chen, Tingting and Zhou, Fan and Dai, Qingyang and Chu, Zhixuan and Mei, Hongyuan},
  journal={arXiv preprint arXiv:2410.04526},
  year={2024},
  url={https://arxiv.org/abs/2410.04526}
}
```