---
license: cc-by-4.0
language:
- en
- es
- fr
- it
tags:
- casimedicos
- explainability
- medical exams
- medical question answering
- multilinguality
- LLMs
- LLM
pretty_name: MedExpQA
configs:
- config_name: en
  data_files:
  - split: train
    path:
    - data/en/train.en.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/en/dev.en.casimedicos.rag.jsonl
  - split: test
    path:
    - data/en/test.en.casimedicos.rag.jsonl
- config_name: es
  data_files:
  - split: train
    path:
    - data/es/train.es.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/es/dev.es.casimedicos.rag.jsonl
  - split: test
    path:
    - data/es/test.es.casimedicos.rag.jsonl
- config_name: fr
  data_files:
  - split: train
    path:
    - data/fr/train.fr.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/fr/dev.fr.casimedicos.rag.jsonl
  - split: test
    path:
    - data/fr/test.fr.casimedicos.rag.jsonl
- config_name: it
  data_files:
  - split: train
    path:
    - data/it/train.it.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/it/dev.it.casimedicos.rag.jsonl
  - split: test
    path:
    - data/it/test.it.casimedicos.rag.jsonl
task_categories:
- text-generation
- question-answering
size_categories:
- 1K<n<10K
---

<p align="center">
<br>
<img src="http://www.ixa.eus/sites/default/files/anitdote.png" style="height: 200px;">
<br>
</p>

# MedExpQA: Multilingual Benchmarking of Medical QA with Reference Gold Explanations and Retrieval Augmented Generation (RAG)

We present MedExpQA, a new multilingual parallel medical benchmark for the evaluation of LLMs on Medical Question Answering.
The benchmark can be used for several NLP tasks, including **Medical Question Answering** and **Explanation Generation**.

Although the design of MedExpQA is independent of any specific dataset, for this first version of the benchmark we leverage the commented MIR exams from the [Antidote CasiMedicos dataset](https://huggingface.co/datasets/HiTZ/casimedicos-exp), which includes gold reference explanations and is currently available in 4 languages: **English, French, Italian and Spanish**.

<table style="width:33%">
<tr>
<th colspan="2">Antidote CasiMedicos splits</th>
</tr>
<tr>
<td>train</td>
<td>434</td>
</tr>
<tr>
<td>validation</td>
<td>63</td>
</tr>
<tr>
<td>test</td>
<td>125</td>
</tr>
</table>
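
Each language is a separate configuration containing the train, validation and test splits shown above. A minimal loading sketch with the Hugging Face `datasets` library (the repo id `HiTZ/MedExpQA` is assumed to be the one hosting this card):

```python
from datasets import load_dataset

# Load the English configuration; "es", "fr" and "it" work the same way.
# The repo id assumes this card is hosted at HiTZ/MedExpQA.
medexpqa_en = load_dataset("HiTZ/MedExpQA", "en")

print(medexpqa_en)                     # DatasetDict with train/validation/test
print(medexpqa_en["train"][0].keys())  # attributes of one document
```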

- 📖 Paper: [MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering](https://doi.org/10.1016/j.artmed.2024.102938)
- 💻 GitHub Repo (Data and Code): [https://github.com/hitz-zentroa/MedExpQA](https://github.com/hitz-zentroa/MedExpQA)
- 🌐 Project Website: [https://univ-cotedazur.eu/antidote](https://univ-cotedazur.eu/antidote)
- Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR

## Example of Document in Antidote CasiMedicos Dataset

<p align="center">
<img src="https://github.com/ixa-ehu/antidote-casimedicos/blob/main/casimedicos-exp.png?raw=true" style="height: 600px;">
</p>

In this repository you can find the following data:

- **casimedicos-raw**: the textual content, including Clinical Case (C), Question (Q), Possible Answers (P), and Explanation (E), as shown in the example above.
- **casimedicos-exp**: the manual annotations linking the explanations to the correct and incorrect possible answers.
- **MedExpQA**: the benchmark for Medical QA, based on gold reference explanations from casimedicos-exp and on knowledge automatically extracted using RAG methods.

## Data Explanation

The following attributes compose **casimedicos-raw**:

- **id**: unique document identifier.
- **year**: year in which the exam was published by the Spanish Ministry of Health.
- **question_id_specific**: id given to the question in the original exam published by the Spanish Ministry of Health.
- **full_question**: Clinical Case (C) and Question (Q), as illustrated in the example document above.
- **full_answer**: full commented explanation (E), as illustrated in the example document above.
- **type**: medical speciality.
- **options**: Possible Answers (P), as illustrated in the example document above.
- **correct_option**: solution to the exam question.

Additionally, the following jsonl attribute was added to create **casimedicos-exp** (a reading sketch covering these attributes follows below):

- **explanations**: for each possible answer above, the manual annotation states:
  1. whether the explanation for that possible answer exists in the full comment (E), and
  2. if present, the character and token offsets plus the text corresponding to the explanation for that possible answer.
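
As a quick orientation, here is a minimal sketch that reads one document from a jsonl split and prints the attributes described above. The underscore spellings of the keys follow the attribute list, and the inner layout of `explanations` is only hinted at by the description:

```python
import json

# Any of the per-language jsonl splits listed in the configs works the same way.
path = "data/en/train.en.casimedicos.rag.jsonl"

with open(path, encoding="utf-8") as f:
    doc = json.loads(f.readline())

print(doc["id"], doc["year"], doc["type"])  # document metadata
print(doc["full_question"])                 # Clinical Case (C) + Question (Q)
print(doc["options"])                       # Possible Answers (P)
print(doc["correct_option"])                # solution to the exam question
print(doc["full_answer"])                   # full commented explanation (E)

# casimedicos-exp annotation: per-option explanation spans
# (character/token offsets plus the explanation text, when present in E).
print(doc["explanations"])
```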

For **MedExpQA** benchmarking we have added the following element to the data:

- **rag**
  1. **clinical_case_options/MedCorp/RRF-2**: 32 snippets extracted from the MedCorp corpus using the combination of _clinical case_ and _options_ as the query during the retrieval process. These 32 snippets are the Reciprocal Rank Fusion (RRF) combination of 32 snippets retrieved with BM25 and 32 retrieved with MedCPT (a short sketch of RRF follows below).
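
For reference, Reciprocal Rank Fusion scores each candidate by summing reciprocal ranks across the retrievers' ranked lists. A minimal sketch; the constant `k = 60` is the value commonly used in the RRF literature, not necessarily the one used to build this dataset:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60, top_n=32):
    """Fuse several ranked lists of snippet ids into a single ranking."""
    scores = defaultdict(float)
    for ranked_ids in rankings:
        for rank, snippet_id in enumerate(ranked_ids, start=1):
            scores[snippet_id] += 1.0 / (k + rank)  # RRF score: sum of 1/(k + rank)
    # Highest fused score first; keep the top_n snippets.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Example: fuse a BM25 ranking and a MedCPT ranking into the final snippet list.
fused = reciprocal_rank_fusion([["s3", "s1", "s7"], ["s1", "s9", "s3"]])
print(fused)
```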

## MedExpQA Benchmark Overview

<p align="left">
<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/overall_system.png?raw=true" style="height: 300px;">
</p>

## Prompt Example for LLMs

<p align="left">
<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/prompt_en.png?raw=true" style="height: 250px;">
</p>
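
The image above shows the actual English prompt used in the paper. Purely as a hypothetical illustration of how such a prompt can be assembled from a document and its retrieved snippets (the nested access path mirrors the attribute name rag -> clinical_case_options/MedCorp/RRF-2, and `options` is assumed to be a dict keyed by option number):

```python
def build_prompt(doc, n_snippets=5):
    """Hypothetical sketch, not the paper's exact prompt template."""
    snippets = doc["rag"]["clinical_case_options"]["MedCorp"]["RRF-2"][:n_snippets]
    context = "\n".join(str(s) for s in snippets)
    options = "\n".join(f"{k}. {v}" for k, v in doc["options"].items())
    return (
        "You are a medical doctor answering a multiple-choice exam question.\n\n"
        f"Context:\n{context}\n\n{doc['full_question']}\n\n{options}\n\nAnswer:"
    )
```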

## Benchmark Results (averaged per type of external knowledge for grounding)

LLMs evaluated: [LLaMA](https://huggingface.co/meta-llama/Llama-2-13b), [PMC-LLaMA](https://huggingface.co/axiong/PMC_LLaMA_13B), [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) and [BioMistral](https://huggingface.co/BioMistral/BioMistral-7B-DARE).

<p align="left">
<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/benchmark.png?raw=true" style="height: 300px;">
</p>

## Citation

If you use MedExpQA, please **cite the following paper**:

```bibtex
@article{ALONSO2024102938,
  title    = {MedExpQA: Multilingual benchmarking of Large Language Models for Medical Question Answering},
  journal  = {Artificial Intelligence in Medicine},
  pages    = {102938},
  year     = {2024},
  issn     = {0933-3657},
  doi      = {10.1016/j.artmed.2024.102938},
  url      = {https://www.sciencedirect.com/science/article/pii/S0933365724001805},
  author   = {Iñigo Alonso and Maite Oronoz and Rodrigo Agerri},
  keywords = {Large Language Models, Medical Question Answering, Multilinguality, Retrieval Augmented Generation, Natural Language Processing},
}
```

**Contact**: [Iñigo Alonso](https://hitz.ehu.eus/en/node/282) and [Rodrigo Agerri](https://ragerri.github.io/)

HiTZ Center - Ixa, University of the Basque Country UPV/EHU