---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
---
# CARDBiomedBench

## Dataset Summary
CARDBiomedBench is a biomedical question-answering benchmark designed to evaluate Large Language Models (LLMs) on complex biomedical tasks. It consists of a curated set of question-answer pairs covering various biomedical domains and reasoning types, challenging models to demonstrate deep understanding and reasoning capabilities in the biomedical field.
## Data Fields

- `question`: string - The biomedical question posed in the dataset.
- `answer`: string - The corresponding answer to the question.
- `bio_category`: string - The biological category or categories associated with the question. Multiple categories are separated by a semicolon (`;`).
- `reasoning_category`: string - The reasoning category or categories associated with the question. Multiple categories are separated by a semicolon (`;`).
- `uuid`: string - Unique identifier for the question-answer pair.
- `template_uuid`: string - Unique identifier of the original template question from which this instance was derived.
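Because `bio_category` and `reasoning_category` pack multiple values into a single semicolon-delimited string, downstream code typically needs to split them before grouping or filtering. A minimal sketch of parsing one record (the field values below are invented placeholders for illustration, not actual dataset rows):

```python
# Hypothetical record mirroring the dataset's schema; values are placeholders.
record = {
    "question": "Example biomedical question?",
    "answer": "Example answer.",
    "bio_category": "Category A;Category B",
    "reasoning_category": "Category C",
    "uuid": "00000000-0000-0000-0000-000000000000",
    "template_uuid": "11111111-1111-1111-1111-111111111111",
}

def split_categories(field: str) -> list[str]:
    """Split a semicolon-delimited category field into a list of clean labels."""
    return [part.strip() for part in field.split(";") if part.strip()]

bio = split_categories(record["bio_category"])
reasoning = split_categories(record["reasoning_category"])
print(bio)        # ['Category A', 'Category B']
print(reasoning)  # ['Category C']
```

Questions derived from the same template share a `template_uuid`, so the same splitting step can feed a per-category or per-template breakdown of model scores.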
## Citation

```bibtex
@article{Bianchi2025.01.15.633272,
  author = {Bianchi, Owen and Willey, Maya and Avarado, Chelsea X and Danek, Benjamin and Khani, Marzieh and Kuznetsov, Nicole and Dadu, Anant and Shah, Syed and Koretsky, Mathew J and Makarious, Mary B and Weller, Cory and Levine, Kristin S and Kim, Sungwon and Jarreau, Paige and Vitale, Dan and Marsan, Elise and Iwaki, Hirotaka and Leonard, Hampton and Bandres-Ciga, Sara and Singleton, Andrew B and Nalls, Mike A. and Mokhtari, Shekoufeh and Khashabi, Daniel and Faghri, Faraz},
  title = {CARDBiomedBench: A Benchmark for Evaluating Large Language Model Performance in Biomedical Research},
  year = {2025},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2025/01/19/2025.01.15.633272},
  journal = {bioRxiv}
}
```