---
language:
  - en
  - zh
  - fr
license: apache-2.0
size_categories:
  - 1K<n<10K
task_categories:
  - question-answering
  - multiple-choice
pretty_name: >-
  FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question
  Answering
tags:
  - finance
dataset_info:
  features:
    - name: idx
      dtype: int32
    - name: question_id
      dtype: string
    - name: context
      dtype: string
    - name: question
      dtype: string
    - name: options
      sequence: string
    - name: image_1
      dtype: image
    - name: image_2
      dtype: image
    - name: image_3
      dtype: image
    - name: image_4
      dtype: image
    - name: image_5
      dtype: image
    - name: image_6
      dtype: image
    - name: image_7
      dtype: image
    - name: image_type
      dtype: string
    - name: answers
      dtype: string
    - name: explanation
      dtype: string
    - name: topic_difficulty
      dtype: string
    - name: question_type
      dtype: string
    - name: subfield
      dtype: string
    - name: language
      dtype: string
    - name: main_question_id
      dtype: string
    - name: sub_question_id
      dtype: string
    - name: is_arithmetic
      dtype: int32
    - name: ans_image_1
      dtype: image
    - name: ans_image_2
      dtype: image
    - name: ans_image_3
      dtype: image
    - name: ans_image_4
      dtype: image
    - name: ans_image_5
      dtype: image
    - name: ans_image_6
      dtype: image
    - name: release
      dtype: string
  splits:
    - name: release_livepro
      num_bytes: 3266580
      num_examples: 103
    - name: release_basic
      num_bytes: 113235537.37
      num_examples: 1945
    - name: release_basic_txt
      num_bytes: 1978313.375
      num_examples: 1945
  download_size: 94674468
  dataset_size: 118480430.745
configs:
  - config_name: default
    data_files:
      - split: release_livepro
        path: data/release_livepro-*
      - split: release_basic
        path: data/release_basic-*
      - split: release_basic_txt
        path: data/release_basic_txt-*
---

## Introduction

FAMMA is a multi-modal financial Q&A benchmark dataset. The questions encompass three heterogeneous image types (tables, charts, and text & math screenshots) and span eight subfields in finance, comprehensively covering topics across major asset classes. All questions are categorized into three difficulty levels (easy, medium, and hard) and are available in three languages (English, Chinese, and French). Furthermore, the questions are divided into two types: multiple-choice and open-ended.

More importantly, FAMMA provides a "live" benchmark for evaluating financial analysis capabilities of LLMs. The benchmark continuously collects new questions from real-world financial professionals, ensuring up-to-date and contamination-free evaluation.

The leaderboard is regularly updated and can be accessed at https://famma-bench.github.io/famma/.

The project code is available at https://github.com/famma-bench/bench-script.

## News

🔥 Latest Updates:

- [2025/03] Release of `release_basic_txt`, a purely textual dataset that uses OCR to extract the multimodal information and convert it into textual context for each question in `release_basic`.
- [2025/03] Added the `is_arithmetic` column to indicate whether a question involves heavy computation.
- [2025/02] Release of the `release_livepro` dataset.
- [2025/01] Release of the `release_basic` dataset, now including answers and explanations with enhanced quality.
- [2024/06] Initial public release of the FAMMA benchmark (based on the `release_basic` dataset), along with our paper: FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering.

## Live Benchmarking Concept

In addition to the baseline dataset (`release_basic`, which contains 1945 questions), FAMMA provides a live benchmark for evaluating the financial analysis capabilities of LLMs. The benchmark continuously collects new questions from real-world financial professionals, ensuring up-to-date and contamination-free evaluation.

The "live" nature of FAMMA means:

1. **Expert-Sourced Questions**: New questions are continuously proposed by financial experts, ensuring they have never been made public before and reflect real-world financial analysis scenarios. See contributors.
2. **Contamination Prevention**: Questions in the live set (currently `release_livepro`) have non-public answers and explanations.
3. **Time-Based Evaluation**: Models can be evaluated on questions from specific time periods.
4. **Domain Coverage**: Questions span different financial topics and complexity levels, curated by domain experts.

## Dataset Versions

FAMMA is continuously updated with new questions. We provide different versions of the dataset:

- `release_basic`: The release containing 1945 questions, collected from online sources. Apart from the questions, both answers and explanations are provided.
- `release_livepro`: The release containing 103 questions, created by invited experts. Only the questions are provided.
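
Both releases can also be loaded directly from the Hub with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the repository id and split names are taken from this dataset card, while the official workflow is the download script described in the Download section.

```python
# Minimal sketch: load the two FAMMA releases as Hugging Face datasets.
# Repository id and split names follow this dataset card.
from datasets import load_dataset

basic = load_dataset("weaverbirdllm/famma", split="release_basic")      # questions + answers/explanations
livepro = load_dataset("weaverbirdllm/famma", split="release_livepro")  # questions only (live set)

print(len(basic), len(livepro))
```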

## Dataset Structure

- `idx`: a unique identifier for the index of the question in the dataset.
- `question_id`: a unique identifier for the question across the whole dataset: `{language}_{main_question_id}_{sub_question_id}_{release_version}`.
- `context`: relevant background information related to the question.
- `question`: the specific query being asked.
- `options`: the list of candidate answers for multiple-choice questions (empty for open-ended questions).
- `image_1` - `image_7`: images referenced in the context or question.
- `image_type`: type of the image, e.g., chart, table, screenshot.
- `answers`: a concise and accurate response (public on `release_basic`, non-public on the live set `release_livepro`).
- `explanation`: a detailed justification for the answer (public on `release_basic`, non-public on the live set `release_livepro`).
- `topic_difficulty`: a measure of the question's complexity based on the level of reasoning required.
- `question_type`: categorized as either multiple-choice or open-ended.
- `subfield`: the specific area of expertise to which the question belongs, categorized into eight subfields.
- `language`: the language in which the question text is written.
- `main_question_id`: a unique identifier under the same language subset for the question within its context; questions with the same context share the same ID.
- `sub_question_id`: a unique identifier for the question within its corresponding main question.
- `is_arithmetic`: whether the question is an arithmetic question that requires heavy calculation.
- `ans_image_1` - `ans_image_6`: images referenced in the answer or explanation (public on `release_basic`, non-public on the live set `release_livepro`).
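
As a quick illustration of the schema above, here is a minimal sketch that inspects one record. The field names follow the feature list above; the assumption that unused `image_*` slots are `None` and that `options` is empty for open-ended questions is illustrative.

```python
# Minimal sketch: inspect one FAMMA record (field names follow the schema above).
from datasets import load_dataset

ds = load_dataset("weaverbirdllm/famma", split="release_basic")
sample = ds[0]

print(sample["question_id"], sample["language"], sample["subfield"])
print(sample["question_type"], sample["topic_difficulty"], "arithmetic:", bool(sample["is_arithmetic"]))

# Question images are decoded as PIL images; unused slots are assumed to be None.
images = [sample[f"image_{i}"] for i in range(1, 8) if sample[f"image_{i}"] is not None]
print(len(images), "question image(s) of type", sample["image_type"])

# `options` is assumed to be non-empty only for multiple-choice questions.
if sample["options"]:
    print("choices:", sample["options"])
```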

## Download

See the download script at https://github.com/famma-bench/bench-script/blob/main/step_1_download_dataset.py.

First, clone the repository and install the dependencies:

```bash
git clone https://github.com/famma-bench/bench-script.git
cd bench-script
pip install -r requirements.txt
```

To download the dataset, run the following command:

```bash
# --split: "release_basic", "release_livepro", or None to download the whole set
python step_1_download_dataset.py \
    --hf_dir "weaverbirdllm/famma" \
    --split "release_basic" \
    --save_dir "./hf_data"
```

Options:

- `--hf_dir`: HuggingFace repository name
- `--split`: Specific version to download (optional)
- `--save_dir`: Local directory to save the dataset (default: `./hf_data`)

After downloading, the dataset will be saved in the directory specified by `--save_dir` (`./hf_data` by default) in JSON format.
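
As a sanity check after running the script, you can load the exported JSON. This is only a sketch: the exact file layout under `--save_dir` depends on the script, so the glob pattern and the assumption that each file holds a list of question records are illustrative.

```python
# Minimal sketch: read the JSON files exported by step_1_download_dataset.py.
# The file layout under --save_dir is assumed, not guaranteed by this card.
import glob
import json

records = []
for path in glob.glob("./hf_data/**/*.json", recursive=True):
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # Assume each file holds either a list of question records or a single record.
    records.extend(data if isinstance(data, list) else [data])

print(f"loaded {len(records)} questions")
```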

## Citation

If you use FAMMA in your research, please cite our paper as follows:

```bibtex
@article{xue2024famma,
  title={FAMMA: A Benchmark for Financial Domain Multilingual Multimodal Question Answering},
  author={Siqiao Xue and Tingting Chen and Fan Zhou and Qingyang Dai and Zhixuan Chu and Hongyuan Mei},
  journal={arXiv preprint arXiv:2410.04526},
  year={2024},
  url={https://arxiv.org/abs/2410.04526}
}
```