---
dataset_info:
- config_name: imaginary-reference
  features:
  - name: role
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: test
    num_bytes: 4485
    num_examples: 25
  download_size: 4391
  dataset_size: 4485
- config_name: indifferent
  features:
  - name: role
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: test
    num_bytes: 11732
    num_examples: 25
  download_size: 10536
  dataset_size: 11732
- config_name: math
  features:
  - name: role
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: test
    num_bytes: 5440
    num_examples: 25
  download_size: 4740
  dataset_size: 5440
- config_name: redundant
  features:
  - name: role
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: test
    num_bytes: 5087
    num_examples: 25
  download_size: 4096
  dataset_size: 5087
- config_name: unanswerable
  features:
  - name: role
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: test
    num_bytes: 12501
    num_examples: 50
  download_size: 8242
  dataset_size: 12501
configs:
- config_name: imaginary-reference
  data_files:
  - split: test
    path: imaginary-reference/test-*
- config_name: indifferent
  data_files:
  - split: test
    path: indifferent/test-*
- config_name: math
  data_files:
  - split: test
    path: math/test-*
- config_name: redundant
  data_files:
  - split: test
    path: redundant/test-*
- config_name: unanswerable
  data_files:
  - split: test
    path: unanswerable/test-*
license: cc-by-nc-4.0
language:
- en
---
|
# DNR Bench
|
|
|
Don’t Reason Bench (DNR Bench) is a novel benchmark designed to expose a vulnerability in current reasoning language models (RLMs): their tendency to over-reason by attempting to solve unsolvable problems, leading to excessively long responses.
|
|
|
# Data Summary

The DNR Bench dataset contains 150 adversarially crafted prompts divided into five distinct categories:

- Imaginary Reference
- Indifferent
- Math
- Redundant
- Unanswerable
|
|
|
Each category targets a specific failure mode observed in reasoning-optimized LLMs, such as hallucinating nonexistent references, failing to remain neutral in ambiguous contexts, incorrectly solving flawed math problems, overanalyzing redundant information, or answering questions that lack sufficient data.
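
Each category is exposed as its own config with a single `test` split and two string columns, `role` and `content`. Below is a minimal sketch of loading the data with the Hugging Face `datasets` library; the repository ID used here is an assumption and should be replaced with the dataset's actual path on the Hub.

```python
# Minimal sketch: iterate over the five DNR Bench configs and load their test splits.
# NOTE: the repository ID "ServiceNow-AI/DNRBench" is an assumption; substitute the
# dataset's real Hub path.
from datasets import load_dataset

CONFIGS = ["imaginary-reference", "indifferent", "math", "redundant", "unanswerable"]

for config in CONFIGS:
    ds = load_dataset("ServiceNow-AI/DNRBench", config, split="test")
    # Each row has two string fields: "role" and "content".
    first = ds[0]
    print(f"{config}: {len(ds)} examples; first role={first['role']!r}")
```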
|
|
|
# Leaderboard

This dataset is used to evaluate reasoning LLMs on the [DNR Leaderboard on Hugging Face](https://huggingface.co/spaces/ServiceNow-AI/Do-not-reason-bench).
|
|
|
|
|
# Citation

```bibtex
@misc{hashemi2025dnrbenchbenchmarkingoverreasoning,
      title={DNR Bench: Benchmarking Over-Reasoning in Reasoning LLMs},
      author={Masoud Hashemi and Oluwanifemi Bamgbose and Sathwik Tejaswi Madhusudhan and Jishnu Sethumadhavan Nair and Aman Tiwari and Vikas Yadav},
      year={2025},
      eprint={2503.15793},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2503.15793},
}
```