---
license: apache-2.0
task_categories:
- question-answering
- multiple-choice
language:
- en
tags:
- biology
- medical
formats:
- csv
pretty_name: BiomixQA
size_categories:
- n<1K
---
# BiomixQA Dataset

## Overview

BiomixQA is a curated biomedical question-answering dataset comprising two distinct components:
1. Multiple Choice Questions (MCQ)
2. True/False Questions

This dataset has been used to validate the Knowledge Graph-based Retrieval-Augmented Generation (KG-RAG) framework across different Large Language Models (LLMs). The diverse nature of its questions, spanning multiple-choice and true/false formats, together with its coverage of a range of biomedical concepts, makes it particularly suitable for assessing the performance of the KG-RAG framework.

Hence, this dataset is designed to support research and development in biomedical natural language processing, knowledge graph reasoning, and question-answering systems.

## Dataset Description

- **Repository:** https://github.com/BaranziniLab/KG_RAG
- **Paper:** [Biomedical knowledge graph-optimized prompt generation for large language models](https://arxiv.org/abs/2311.17330)
- **Point of Contact:** [Karthik Soman](mailto:[email protected])

## Dataset Components

### 1. Multiple Choice Questions (MCQ)

- **File**: `mcq_biomix.csv`
- **Size**: 306 questions
- **Format**: Each question has five choices with a single correct answer

### 2. True/False Questions

- **File**: `true_false_biomix.csv`
- **Size**: 311 questions
- **Format**: Binary (True/False) questions
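
Both components are plain CSV files, so they can be loaded with either `pandas` or the Hugging Face `datasets` library. The snippet below is a minimal loading sketch; it assumes the two CSV files have been downloaded locally from this repository, and the column names follow whatever header each CSV provides.

```python
from datasets import load_dataset

# Minimal loading sketch: assumes mcq_biomix.csv and true_false_biomix.csv
# have been downloaded locally from this repository.
mcq = load_dataset("csv", data_files="mcq_biomix.csv", split="train")                 # 306 questions
true_false = load_dataset("csv", data_files="true_false_biomix.csv", split="train")   # 311 questions

print(len(mcq), len(true_false))
print(mcq[0])  # column names depend on the CSV header
```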

## Potential Uses

1. Evaluating biomedical question-answering systems
2. Testing natural language processing models in the biomedical domain
3. Assessing retrieval capabilities of various RAG (Retrieval-Augmented Generation) frameworks
4. Supporting research in biomedical ontologies and knowledge graphs
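
As a rough illustration of use 1, the sketch below runs a simple accuracy loop over the true/false split with `pandas`. The column names `text` and `answer` and the `ask_model` function are hypothetical placeholders, not part of this dataset's documented schema; adapt them to the actual CSV header and to the system being evaluated.

```python
import pandas as pd

# Hypothetical column names ("text", "answer") and model call ("ask_model");
# replace them with the real CSV header and your QA system.
def ask_model(question: str) -> str:
    """Placeholder for a call to the LLM or RAG pipeline under evaluation."""
    raise NotImplementedError

df = pd.read_csv("true_false_biomix.csv")  # assumes the file is available locally

correct = 0
for _, row in df.iterrows():
    prediction = ask_model(row["text"])  # hypothetical column name
    correct += prediction.strip().lower() == str(row["answer"]).strip().lower()

print(f"Accuracy: {correct / len(df):.3f}")
```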


## Source Data

1. SPOKE: A large-scale biomedical knowledge graph consisting of ~40 million biomedical concepts and ~140 million biologically meaningful relationships (Morris et al. 2023).
2. DisGeNET: Consolidates data about genes and genetic variants linked to human diseases from curated repositories, the GWAS catalog, animal models, and the scientific literature (Piñero et al. 2016).
3. MONDO: Provides information about the ontological classification of Disease entities in the Open Biomedical Ontologies (OBO) format (Vasilevsky et al. 2022).
4. SemMedDB: Contains semantic predications extracted from PubMed citations (Kilicoglu et al. 2012).
5. Monarch Initiative: A platform for disease-gene association data (Mungall et al. 2017).
6. ROBOKOP: A knowledge graph-based system for biomedical data integration and analysis (Bizon et al. 2019).

## Citation

If you use this dataset in your research, please cite the following paper:
```
@article{soman2023biomedical,
  title={Biomedical knowledge graph-optimized prompt generation for large language models},
  author={Soman, Karthik and Rose, Peter W and Morris, John H and Akbas, Rabia E and Smith, Brett and Peetoom, Braian and Villouta-Reyes, Catalina and Cerono, Gabriel and Shi, Yongmei and Rizk-Jackson, Angela and others},
  journal={arXiv preprint arXiv:2311.17330},
  year={2023}
}
```