---
license: cc-by-4.0
configs:
  - config_name: task01
    data_files: dataset/task01.parquet
  - config_name: task02
    data_files: dataset/task02.parquet
  - config_name: task03
    data_files: dataset/task03.parquet
  - config_name: task04
    data_files: dataset/task04.parquet
  - config_name: task05
    data_files: dataset/task05.parquet
  - config_name: task06
    data_files: dataset/task06.parquet
  - config_name: task07
    data_files: dataset/task07.parquet
  - config_name: task08
    data_files: dataset/task08.parquet
  - config_name: task09
    data_files: dataset/task09.parquet
  - config_name: task10
    data_files: dataset/task10.parquet
  - config_name: task11
    data_files: dataset/task11.parquet
  - config_name: task12
    data_files: dataset/task12.parquet
  - config_name: task13
    data_files: dataset/task13.parquet
  - config_name: task14
    data_files: dataset/task14.parquet
  - config_name: task15
    data_files: dataset/task15.parquet
  - config_name: task16
    data_files: dataset/task16.parquet
  - config_name: task17
    data_files: dataset/task17.parquet
  - config_name: task18
    data_files: dataset/task18.parquet
  - config_name: task19
    data_files: dataset/task19.parquet
  - config_name: task20
    data_files: dataset/task20.parquet
  - config_name: task21
    data_files: dataset/task21.parquet
  - config_name: task22
    data_files: dataset/task22.parquet
  - config_name: task23
    data_files: dataset/task23.parquet
task_categories:
  - question-answering
  - text-generation
  - text2text-generation
language:
  - en
size_categories:
  - 10K<n<100K
---

# Dataset Card for ProcBench

## Dataset Overview

### Dataset Description

ProcBench is a benchmark designed to evaluate the multi-step reasoning abilities of large language models (LLMs). It focuses on instruction followability, requiring models to solve problems by following explicit, step-by-step procedures. The tasks included in this dataset do not require complex implicit knowledge but emphasize strict adherence to provided instructions. The dataset evaluates model performance on tasks that are straightforward for humans but challenging for LLMs as the number of steps increases.

- **Curated by:** Araya, AI Alignment Network, AutoRes
- **Language(s) (NLP):** English
- **License:** CC-BY-4.0

### Dataset Sources

- **Paper:** [arXiv:2410.03117](https://arxiv.org/abs/2410.03117)

## Uses

### Direct Use

ProcBench is intended for evaluating the instruction-following capability of LLMs in multi-step procedural tasks. It provides an assessment of how well models follow sequential instructions across a variety of task types.
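
For evaluation runs, an individual task can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset is hosted at `ifujisawa/procbench` (the config names `task01` through `task23` come from the YAML header above):

```python
from datasets import load_dataset

# Each of the 23 configs defined in the YAML header is one task type.
# The repository id is an assumption based on this card's location.
ds = load_dataset("ifujisawa/procbench", "task01")

print(ds)              # inspect available splits and column names
print(ds["train"][0])  # a single parquet file is exposed as a "train" split
```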

## Dataset Structure

### Overview

The dataset consists of 23 task types, with a total of 5,520 examples. Tasks involve operations such as string manipulation, list processing, and numeric computation. Each task is paired with explicit instructions, requiring the model to output intermediate states along with the final result.

### Difficulty Levels

Tasks are categorized into three difficulty levels:

- **Short:** 2-6 steps
- **Medium:** 7-16 steps
- **Long:** 17-25 steps

The difficulty levels can be obtained by running the preprocessing script `preprocess.py` provided in the GitHub repository.
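
The same bucketing can also be reproduced directly from an example's step count. A minimal sketch, assuming a hypothetical integer field `n_steps` (the actual field layout is determined by `preprocess.py`):

```python
def difficulty(n_steps: int) -> str:
    """Map a step count to the difficulty levels listed above.

    `n_steps` is a hypothetical name for the number of procedure steps
    in an example; consult preprocess.py for the real field name.
    """
    if n_steps <= 6:
        return "short"   # 2-6 steps
    if n_steps <= 16:
        return "medium"  # 7-16 steps
    return "long"        # 17-25 steps
```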

## Dataset Creation

### Curation Rationale

ProcBench was created to evaluate LLMs’ ability to follow instructions in a procedural manner. The goal is to isolate and test instruction followability without reliance on complex implicit knowledge, offering a unique perspective on procedural reasoning.

### Source Data

#### Data Collection and Processing

Each example combines a template with a question. The template is fixed per task and describes the procedure for solving the question. All templates, along with the generators used to create the questions, are available in the GitHub repository.
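
Conceptually, a prompt is assembled from these two parts. A minimal sketch (the function and argument names are illustrative, not the repository's actual API):

```python
def build_prompt(template: str, question: str) -> str:
    # The template is fixed per task and states the solving procedure;
    # the question is a generated concrete instance to apply it to.
    # Illustrative only: the real templates and question generators
    # live in the GitHub repository.
    return f"{template}\n\n{question}"
```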

## Citation

**BibTeX:**

```bibtex
@misc{fujisawa2024procbench,
    title={ProcBench: Benchmark for Multi-Step Reasoning and Following Procedure},
    author={Ippei Fujisawa and Sensho Nobe and Hiroki Seto and Rina Onda and Yoshiaki Uchida and Hiroki Ikoma and Pei-Chun Chien and Ryota Kanai},
    year={2024},
    eprint={2410.03117},
    archivePrefix={arXiv},
    primaryClass={cs.AI}
}
```