Update README.md
README.md CHANGED
@@ -60,6 +60,36 @@ Hence, this dataset is designed to support research and development in biomedica
3. Assessing retrieval capabilities of various RAG (Retrieval-Augmented Generation) frameworks
4. Supporting research in biomedical ontologies and knowledge graphs

## Performance Analysis

We conducted a comprehensive analysis of the performance of three Large Language Models (LLMs), Llama-2-13b, GPT-3.5-Turbo (0613), and GPT-4, on the BiomixQA dataset. We compared their performance using both a standard prompt-based (zero-shot) approach and our novel Knowledge Graph Retrieval-Augmented Generation (KG-RAG) framework.
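
As a rough illustration of the prompt-based (zero-shot) setting, the sketch below scores an LLM on the MCQ questions. The repository id, config name, column names, and the `ask_llm` helper are assumptions for illustration, not the exact evaluation code behind the numbers reported here.

```python
# Minimal zero-shot evaluation sketch (illustrative only).
# NOTE: the repo id "kg-rag/BiomixQA", the config name "mcq", and the column
# names "question", "options", and "correct_answer" are assumptions; check the
# dataset card for the actual schema. `ask_llm` stands in for whichever LLM
# client (GPT-4, GPT-3.5-Turbo, Llama-2-13b, ...) you want to evaluate.
from datasets import load_dataset

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to the LLM under evaluation."""
    raise NotImplementedError

def zero_shot_accuracy(split: str = "train") -> float:
    data = load_dataset("kg-rag/BiomixQA", "mcq", split=split)
    correct = 0
    for row in data:
        prompt = (
            f"Question: {row['question']}\n"
            f"Options: {row['options']}\n"
            "Answer with the single best option, nothing else."
        )
        prediction = ask_llm(prompt)
        correct += int(prediction.strip().lower() == str(row["correct_answer"]).strip().lower())
    return correct / len(data)
```

In the KG-RAG setting, the same prompt would additionally be augmented with context retrieved from the knowledge graph before the model is called.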

### Performance Summary

Table 1: Performance (accuracy) of LLMs on the BiomixQA datasets using prompt-based (zero-shot) and KG-RAG approaches. For more details, refer to [this paper](https://arxiv.org/abs/2311.17330).

| Model | True/False (Prompt-based) | True/False (KG-RAG) | MCQ (Prompt-based) | MCQ (KG-RAG) |
|-------|--------------------------:|--------------------:|-------------------:|-------------:|
| Llama-2-13b | 0.89 ± 0.02 | 0.94 ± 0.01 | 0.31 ± 0.03 | 0.53 ± 0.03 |
| GPT-3.5-Turbo (0613) | 0.87 ± 0.02 | 0.95 ± 0.01 | 0.63 ± 0.03 | 0.79 ± 0.02 |
| GPT-4 | 0.90 ± 0.02 | 0.95 ± 0.01 | 0.68 ± 0.03 | 0.74 ± 0.03 |

### Key Observations

1. **Consistent Performance Enhancement**: We observed a consistent performance enhancement for all three LLMs when using the KG-RAG framework, on both the True/False and MCQ datasets.

2. **Significant Improvement for Llama-2**: The KG-RAG framework significantly elevated the performance of Llama-2-13b, particularly on the more challenging MCQ dataset, where accuracy rose from 0.31 ± 0.03 to 0.53 ± 0.03, a relative increase of roughly 71% ((0.53 - 0.31) / 0.31 ≈ 0.71).

3. **GPT-4 vs GPT-3.5-Turbo on MCQ**: Intriguingly, we observed a small but statistically significant drop in the performance of the GPT-4 model (0.74 ± 0.03) compared to the GPT-3.5-Turbo model (0.79 ± 0.02) on the MCQ dataset when using the KG-RAG framework. This difference was not observed in the prompt-based approach.
   - Statistical significance: t-test, p-value < 0.0001, t-statistic = -47.7, N = 1000 (a sketch of this kind of comparison is given after this list).

4. **True/False Dataset Performance**: All models showed high performance on the True/False dataset, with the KG-RAG approach yielding slightly better results across all models.
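
The statistical comparison in observation 3 is reported as a t-test with N = 1000. As a hedged sketch (not necessarily the exact procedure used in the paper), one way to run such a comparison is to bootstrap each model's per-question correctness 1,000 times and apply an independent two-sample t-test to the resampled accuracy distributions; `gpt4_correct` and `gpt35_correct` below are hypothetical 0/1 arrays of per-question results.

```python
# Bootstrap + t-test sketch for comparing two models' MCQ accuracies.
# NOTE: illustrative only; the per-question correctness arrays are toy data,
# not results from the paper.
import numpy as np
from scipy import stats

def bootstrap_accuracies(correct: np.ndarray, n_boot: int = 1000, seed: int = 0) -> np.ndarray:
    """Return n_boot resampled accuracy estimates for one model."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(correct), size=(n_boot, len(correct)))  # resample questions with replacement
    return correct[idx].mean(axis=1)  # accuracy of each bootstrap sample

# Toy stand-ins for real per-question results (1 = answered correctly).
rng = np.random.default_rng(42)
gpt4_correct = rng.binomial(1, 0.74, size=500)
gpt35_correct = rng.binomial(1, 0.79, size=500)

acc_gpt4 = bootstrap_accuracies(gpt4_correct, seed=1)
acc_gpt35 = bootstrap_accuracies(gpt35_correct, seed=2)

t_stat, p_value = stats.ttest_ind(acc_gpt4, acc_gpt35)
print(f"GPT-4: {acc_gpt4.mean():.2f} ± {acc_gpt4.std():.2f} | "
      f"GPT-3.5-Turbo: {acc_gpt35.mean():.2f} ± {acc_gpt35.std():.2f} | "
      f"t = {t_stat:.1f}, p = {p_value:.1e}")
```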

## Source Data