LoneStriker committed
Commit 46353d5 • 1 Parent(s): 1e06c96

Upload folder using huggingface_hub
Browse files
- .gitattributes +5 -35
- BioMistral-7B-SLERP-Q3_K_L.gguf +3 -0
- BioMistral-7B-SLERP-Q4_K_M.gguf +3 -0
- BioMistral-7B-SLERP-Q5_K_M.gguf +3 -0
- BioMistral-7B-SLERP-Q6_K.gguf +3 -0
- BioMistral-7B-SLERP-Q8_0.gguf +3 -0
- README.md +148 -0
- mergekit_config.yml +17 -0
.gitattributes
CHANGED
@@ -1,35 +1,5 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
+BioMistral-7B-SLERP-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+BioMistral-7B-SLERP-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+BioMistral-7B-SLERP-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+BioMistral-7B-SLERP-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+BioMistral-7B-SLERP-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
BioMistral-7B-SLERP-Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:143c08547fcbf37d454b836e68843ae4b08a45c7d86c118ef3b3c5266e26df36
size 3822024928
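Each of the .gguf entries in this commit is stored as a Git LFS pointer: per the git-lfs spec, `oid` is the SHA-256 of the actual file content and `size` is its byte count. A minimal sketch for checking a downloaded file against its pointer (the file name and hash are taken from this commit):

```python
# Minimal sketch: verify a downloaded GGUF file against its LFS pointer.
# Per the git-lfs pointer spec, `oid` is the SHA-256 of the file content.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fp:
        for chunk in iter(lambda: fp.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "143c08547fcbf37d454b836e68843ae4b08a45c7d86c118ef3b3c5266e26df36"
assert sha256_of("BioMistral-7B-SLERP-Q3_K_L.gguf") == expected
```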
BioMistral-7B-SLERP-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48caeff759a52cd34ad360a54c00ba9e414166a09d80a446c2e00f8230a1dca5
size 4368439520
BioMistral-7B-SLERP-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f83e21e63e3e2acb2b5e487508dba1ad24dcac2ddd81461b4eb62bd34ac6b8a9
size 5131409632
BioMistral-7B-SLERP-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:10f1943335fd54058221804127d8e8ba3261e4253330a099c9ef995e3e620f5a
size 5942065376
BioMistral-7B-SLERP-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c424ff93157a02edebfa6db4ac73ff889ced479861a18071b1bfd953e960b0a
size 7695857888
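The five files above are GGUF quantizations of the merged model. As a sketch of how such a file can be used locally (not part of this commit), one option is the `llama-cpp-python` bindings; the prompt and sampling settings below are illustrative:

```python
# Minimal sketch: load one of the uploaded GGUF quantizations with
# llama-cpp-python (pip install llama-cpp-python). The file name is from
# this commit; the prompt is illustrative only.
from llama_cpp import Llama

llm = Llama(
    model_path="BioMistral-7B-SLERP-Q4_K_M.gguf",  # any of the five quants
    n_ctx=2048,  # matches the sequence length reported in the model card
)

out = llm(
    "[INST] What are the first-line treatments for hypertension? [/INST]",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```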
README.md
ADDED
@@ -0,0 +1,148 @@
---
base_model:
- BioMistral/BioMistral-7B
- mistralai/Mistral-7B-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
- slerp
- medical
- biology
license: apache-2.0
datasets:
- pubmed
language:
- fr
- en
- es
- it
- pl
- nl
- de
pipeline_tag: text-generation
---
# BioMistral-7B-slerp

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
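As a rough illustration of what SLERP computes (not code from the model card): the merge interpolates each pair of weight tensors along the arc between them rather than along a straight line. A minimal numpy sketch:

```python
# Minimal sketch of spherical linear interpolation (SLERP) between two
# weight tensors flattened to vectors; t=0 returns v0, t=1 returns v1.
# Illustrative only: mergekit's implementation also handles per-layer
# t schedules and other edge cases omitted here.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    u0 = v0 / np.linalg.norm(v0)
    u1 = v1 / np.linalg.norm(v1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))  # angle between tensors
    if np.isclose(omega, 0.0):
        return (1 - t) * v0 + t * v1  # near-parallel: fall back to linear interpolation
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```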
### Models Merged

The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.1
        layer_range: [0, 32]
      - model: BioMistral/BioMistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
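For reference, a configuration like the one above is applied with mergekit itself. The sketch below mirrors the usage shown in mergekit's documentation; the output path is illustrative and the exact API may vary by version:

```python
# Minimal sketch: run the SLERP merge described by the YAML above.
# Assumes `pip install mergekit`; the output directory is illustrative.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("mergekit_config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./BioMistral-7B-SLERP",  # output directory (illustrative)
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```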
<p align="center">
  <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>

# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains

**Abstract:**

Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated this benchmark into 7 other languages and evaluated it. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.

# 1. BioMistral models

**BioMistral** is a suite of Mistral-based, further pre-trained, open-source models suited for the medical domain, pre-trained on textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models were trained on the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.

| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models

| Base Model | Method | q_group_size | w_bit | version | VRAM (GB) | Time (relative) | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:---------:|:---------------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
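The BnB rows appear to correspond to on-the-fly bitsandbytes quantization rather than separate checkpoints. A minimal sketch of loading the base model in 4-bit this way, using the standard Transformers API (the exact quantization settings here are illustrative, not necessarily those used for the table):

```python
# Minimal sketch: load BioMistral-7B with on-the-fly bitsandbytes 4-bit
# quantization (cf. the BnB.4 row above). Requires `pip install
# bitsandbytes accelerate`. These settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
```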
# 3. Using BioMistral

You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.

Loading the model and tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
# AutoModelForCausalLM (rather than plain AutoModel) loads the LM head
# needed for text generation
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")
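Once loaded, generation follows the standard Transformers API; the prompt and decoding settings below are illustrative:

```python
# Illustrative usage: greedy decoding from the loaded model.
# The prompt follows Mistral-Instruct's [INST] convention.
inputs = tokenizer(
    "[INST] What is the mechanism of action of metformin? [/INST]",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```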
# 4. Supervised Fine-tuning Benchmark

| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |

Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot evaluation. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, second best underlined. *GPT-3.5 Turbo performance is reported from the 3-shot results without SFT.
# Citation BibTeX

arXiv: [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)

```bibtex
@misc{labrak2024biomistral,
      title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
      author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
      year={2024},
      eprint={2402.10373},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
mergekit_config.yml
ADDED
@@ -0,0 +1,17 @@

slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.1
        layer_range: [0, 32]
      - model: Project44/BioMistral-7B-0.1-PubMed-V2
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16