---
license: cc-by-4.0
language:
- en
- es
- fr
- it
tags:
- casimedicos
- explainability
- medical exams
- medical question answering
- multilinguality
- LLMs
- LLM
pretty_name: MedExpQA
configs:
- config_name: en
  data_files:
  - split: train
    path:
    - data/en/train.en.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/en/dev.en.casimedicos.rag.jsonl
  - split: test
    path:
    - data/en/test.en.casimedicos.rag.jsonl
- config_name: es
  data_files:
  - split: train
    path:
    - data/es/train.es.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/es/dev.es.casimedicos.rag.jsonl
  - split: test
    path:
    - data/es/test.es.casimedicos.rag.jsonl
- config_name: fr
  data_files:
  - split: train
    path:
    - data/fr/train.fr.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/fr/dev.fr.casimedicos.rag.jsonl
  - split: test
    path:
    - data/fr/test.fr.casimedicos.rag.jsonl
- config_name: it
  data_files:
  - split: train
    path:
    - data/it/train.it.casimedicos.rag.jsonl
  - split: validation
    path:
    - data/it/dev.it.casimedicos.rag.jsonl
  - split: test
    path:
    - data/it/test.it.casimedicos.rag.jsonl
task_categories:
- text-generation
- question-answering
size_categories:
- 1K<n<10K
---

<p align="center">
    <br>
    <img src="http://www.ixa.eus/sites/default/files/anitdote.png" style="height: 200px;">
    <br>
</p>

# MedExpQA: Multilingual Benchmarking of Medical QA with Gold Reference Explanations and Retrieval Augmented Generation (RAG)

We present MedExpQA, a new multilingual parallel medical benchmark for the evaluation of LLMs on Medical Question Answering.
The benchmark can be used for several NLP tasks, including **Medical Question Answering** and **Explanation Generation**.

Although the design of MedExpQA is independent of any specific dataset, this first version of the benchmark leverages the commented MIR exams 
from the [Antidote CasiMedicos dataset, which includes gold reference explanations](https://huggingface.co/datasets/HiTZ/casimedicos-exp) and is currently 
available in four languages: **English, French, Italian and Spanish**.

<table style="width:33%">
    <tr>
        <th>Antidote CasiMedicos split</th>
        <th>Documents</th>
    </tr>
    <tr>
        <td>train</td>
        <td>434</td>
    </tr>
    <tr>
        <td>validation</td>
        <td>63</td>
    </tr>
    <tr>
        <td>test</td>
        <td>125</td>
    </tr>
</table>
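
The splits above can be loaded per language with the 🤗 `datasets` library. A minimal sketch (the `en` configuration is used here; the other configurations are `es`, `fr` and `it`):

```python
from datasets import load_dataset

# One configuration per language: "en", "es", "fr", "it".
medexpqa_en = load_dataset("HiTZ/MedExpQA", "en")

# Should report the split sizes listed in the table above (434 / 63 / 125).
for split in ("train", "validation", "test"):
    print(split, len(medexpqa_en[split]))
```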

- 📖 Paper: [MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering](https://doi.org/10.1016/j.artmed.2024.102938)
- 💻 Github Repo (Data and Code): [https://github.com/hitz-zentroa/MedExpQA](https://github.com/hitz-zentroa/MedExpQA)
- 🌐 Project Website: [https://univ-cotedazur.eu/antidote](https://univ-cotedazur.eu/antidote)
- Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR


## Example of Document in Antidote CasiMedicos Dataset

<p align="center">
<img src="https://github.com/ixa-ehu/antidote-casimedicos/blob/main/casimedicos-exp.png?raw=true" style="height: 600px;">
</p>

In this repository you can find the following data:

- **casimedicos-raw**: The textual content including Clinical Case (C), Question (Q), Possible Answers (P), and Explanation (E) as shown in the example above.
- **casimedicos-exp**: The manual annotations linking the explanations to the correct and incorrect possible answers.
- **MedExpQA**: The benchmark for Medical QA, based on the gold reference explanations from casimedicos-exp and on knowledge automatically retrieved using RAG methods.

## Data Explanation

The following attributes compose **casimedicos-raw**:

- **id**: unique document identifier.
- **year**: year in which the exam was published by the Spanish Ministry of Health.
- **question_id_specific**: identifier of the question in the original exam published by the Spanish Ministry of Health.
- **full_question**: Clinical Case (C) and Question (Q) as illustrated in the example document above.
- **full_answer**: Full commented explanation (E) as illustrated in the example document above.
- **type**: medical speciality.
- **options**: Possible Answers (P) as illustrated in the example document above.
- **correct_option**: solution to the exam question.

Additionally, the following JSONL attribute was added to create **casimedicos-exp**:

- **explanations**: for each possible answer above, the manual annotation states whether:
  1. an explanation for that possible answer exists in the full comment (E), and
  2. if present, the character and token offsets plus the text corresponding to that explanation.
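
As a rough illustration of how these attributes look in the raw files, the sketch below reads the first record from the English training split and prints its main fields. The top-level keys follow the attribute lists above; treating `options` as a mapping from option number to option text is an assumption made only for this example, so inspect an actual record before relying on it.

```python
import json

# Path as declared in the dataset configuration above.
path = "data/en/train.en.casimedicos.rag.jsonl"

with open(path, encoding="utf-8") as f:
    record = json.loads(f.readline())  # first document of the split

print(record["id"], record["year"], record["type"])
print(record["full_question"])
# Assumption: `options` maps the option number to the option text.
for number, text in record["options"].items():
    print(f"  option {number}: {text}")
print("correct option:", record["correct_option"])
print("annotated explanations:", list(record["explanations"]))
```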

For **MedExpQA** benchmarking we have added the following element to the data:

- **rag**
  1. **clinical_case_options/MedCorp/RRF-2**: 32 snippets extracted from the MedCorp corpus using the combination of the _clinical case_ and the _options_ as the
  retrieval query. These 32 snippets are the result of combining, via Reciprocal Rank Fusion (RRF), two lists of 32 snippets retrieved separately with BM25 and MedCPT.
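
As an illustration of the fusion step described above, the following is a minimal Reciprocal Rank Fusion sketch. It is not the retrieval code used for the benchmark; the constant `k = 60` and the toy snippet identifiers are assumptions chosen only for the example.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60, top_n=32):
    """Fuse several ranked lists of snippet ids with standard RRF scoring."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, snippet_id in enumerate(ranking, start=1):
            # Each appearance contributes 1 / (k + rank); k=60 is the common default.
            scores[snippet_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy example: fuse a BM25 ranking and a MedCPT ranking of snippet ids.
bm25_ranking = ["s3", "s1", "s7", "s2"]
medcpt_ranking = ["s1", "s9", "s3", "s4"]
print(reciprocal_rank_fusion([bm25_ranking, medcpt_ranking], top_n=3))
```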


## MedExpQA Benchmark Overview

<p align="left">
<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/overall_system.png?raw=true" style="height: 300px;">
</p>

## Prompt Example for LLMs

<p align="left">
<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/prompt_en.png?raw=true" style="height: 250px;">
</p>
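
The exact prompt template used in the paper is the one shown in the figure above (and available in the GitHub repository). The snippet below is only a hypothetical sketch of how the clinical case, question, options and retrieved snippets could be assembled into such a prompt; the nested key names for the `rag` field follow the data description above and should be verified against an actual record.

```python
def build_prompt(record, n_snippets=5):
    """Hypothetical prompt builder; the real template is in the MedExpQA repo."""
    # Key names follow the data description above; verify them on a real record.
    snippets = record["rag"]["clinical_case_options/MedCorp/RRF-2"][:n_snippets]
    context = "\n".join(f"- {snippet}" for snippet in snippets)
    options = "\n".join(f"{number}. {text}" for number, text in record["options"].items())
    return (
        "You are a medical expert answering a multiple-choice exam question.\n\n"
        f"Context:\n{context}\n\n"
        f"{record['full_question']}\n\n"
        f"Options:\n{options}\n\n"
        "Answer with the number of the correct option."
    )
```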

## Benchmark Results (averaged per type of external knowledge for grounding)

LLMs evaluated:  [LLaMA](https://huggingface.co/meta-llama/Llama-2-13b), [PMC-LLaMA](https://huggingface.co/axiong/PMC_LLaMA_13B), 
[Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) and [BioMistral](https://huggingface.co/BioMistral/BioMistral-7B-DARE).

<p align="left">
<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/benchmark.png?raw=true" style="height: 300px;">
</p>


## Citation

If you use MedExpQA then please **cite the following paper**:

```bibtex
@article{ALONSO2024102938,
title = {MedExpQA: Multilingual benchmarking of Large Language Models for Medical Question Answering},
journal = {Artificial Intelligence in Medicine},
pages = {102938},
year = {2024},
issn = {0933-3657},
doi = {10.1016/j.artmed.2024.102938},
url = {https://www.sciencedirect.com/science/article/pii/S0933365724001805},
author = {Iñigo Alonso and Maite Oronoz and Rodrigo Agerri},
keywords = {Large Language Models, Medical Question Answering, Multilinguality, Retrieval Augmented Generation, Natural Language Processing},
abstract = {Large Language Models (LLMs) have the potential of facilitating the development of Artificial Intelligence technology to assist medical experts for interactive decision support. This potential has been illustrated by the state-of-the-art performance obtained by LLMs in Medical Question Answering, with striking results such as passing marks in licensing medical exams. However, while impressive, the required quality bar for medical applications remains far from being achieved. Currently, LLMs remain challenged by outdated knowledge and by their tendency to generate hallucinated content. Furthermore, most benchmarks to assess medical knowledge lack reference gold explanations which means that it is not possible to evaluate the reasoning of LLMs predictions. Finally, the situation is particularly grim if we consider benchmarking LLMs for languages other than English which remains, as far as we know, a totally neglected topic. In order to address these shortcomings, in this paper we present MedExpQA, the first multilingual benchmark based on medical exams to evaluate LLMs in Medical Question Answering. To the best of our knowledge, MedExpQA includes for the first time reference gold explanations, written by medical doctors, of the correct and incorrect options in the exams. Comprehensive multilingual experimentation using both the gold reference explanations and Retrieval Augmented Generation (RAG) approaches show that performance of LLMs, with best results around 75 accuracy for English, still has large room for improvement, especially for languages other than English, for which accuracy drops 10 points. Therefore, despite using state-of-the-art RAG methods, our results also demonstrate the difficulty of obtaining and integrating readily available medical knowledge that may positively impact results on downstream evaluations for Medical Question Answering. Data, code, and fine-tuned models will be made publicly available: https://huggingface.co/datasets/HiTZ/MedExpQA.}
}
```

**Contact**: [Iñigo Alonso](https://hitz.ehu.eus/en/node/282) and [Rodrigo Agerri](https://ragerri.github.io/)<br>
HiTZ Center - Ixa, University of the Basque Country UPV/EHU