---
language:
- en
license: llama3
tags:
- m42
- health
- healthcare
- clinical-llm
pipeline_tag: text-generation
inference: false
license_name: llama3
---

# **Med42-v2 - Clinical Large Language Models**

Med42-v2 is a suite of open-access clinical large language models (LLMs) instruction-tuned by M42 to expand access to medical knowledge. Built on Llama3 and comprising either 8 or 70 billion parameters, these generative AI systems provide high-quality answers to medical questions.
## Model Details

*Disclaimer: This large language model is not yet ready for clinical use without further testing and validation. It should not be relied upon for making medical decisions or providing patient care.*

Beginning with the Llama3 models, the Med42-v2 suite was instruction-tuned on a dataset of ~1B tokens compiled from diverse open-access, high-quality sources, including medical flashcards, exam questions, and open-domain dialogues.

**Model Developers:** M42 Health AI Team

**Finetuned from model:** Llama3 - 8B & 70B Instruct

**Context length:** 8k tokens

**Input:** Text-only data

**Output:** Model generates text only

**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the models' performance.

**License:** Llama 3 Community License Agreement

**Research Paper:** *Coming soon*

## Intended Use

The Med42-v2 suite of models is being made available for further testing and assessment as AI assistants to enhance clinical decision-making and expand access to LLMs for healthcare use. Potential use cases include:
- Medical question answering
- Patient record summarization
- Aiding medical diagnosis
- General health Q&A

**Run the model**

You can use the 🤗 Transformers library `text-generation` pipeline to run inference.

```python
import transformers
import torch

model_name_or_path = "m42-health/Llama3-Med42-70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_name_or_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful, respectful and honest medical assistant. You are a second version of Med42 developed by the AI team at M42, UAE. "
            "Always answer as helpfully as possible, while being safe. "
            "Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. "
            "Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. "
            "If you don't know the answer to a question, please don't share false information."
        ),
    },
    {"role": "user", "content": "What are the symptoms of diabetes?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=False
)

stop_tokens = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    eos_token_id=stop_tokens,
    do_sample=True,
    temperature=0.4,
    top_k=150,
    top_p=0.75,
)

print(outputs[0]["generated_text"][len(prompt):])
```
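The sampling arguments above reshape the next-token distribution before a token is drawn: temperature rescales the logits, `top_k` keeps only the k most probable tokens, and `top_p` further trims to the smallest set whose cumulative probability reaches p. A minimal pure-Python sketch of that filtering (the function name and toy logits are illustrative, not part of the Transformers API):

```python
import math

def sample_filter(logits, temperature, top_k, top_p):
    """Toy sketch of temperature scaling plus top-k and top-p (nucleus) filtering."""
    # 1. Temperature: divide the logits, then softmax (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # 2. Top-k: keep only the k most probable token indices.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = order[:top_k]

    # 3. Top-p: of those, keep the smallest prefix whose mass reaches top_p.
    nucleus, mass = [], 0.0
    for i in kept:
        nucleus.append(i)
        mass += probs[i]
        if mass >= top_p:
            break

    # Renormalize over the surviving tokens and sample from this distribution.
    z = sum(probs[i] for i in nucleus)
    return {i: probs[i] / z for i in nucleus}

# With these toy logits only the two most probable tokens survive the p=0.75 cutoff.
dist = sample_filter([2.0, 1.0, 0.5, -1.0], temperature=1.0, top_k=3, top_p=0.75)
```

A low temperature like the 0.4 used above sharpens the distribution toward the model's top choices, which is generally preferable for factual medical Q&A.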

## Hardware and Software

The training was conducted on an NVIDIA DGX cluster with H100 GPUs, using PyTorch's Fully Sharded Data Parallel (FSDP) framework.
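The core idea of FSDP is that each rank stores only a shard of the flattened parameters and all-gathers the full set just before it is needed. A toy, framework-free sketch of that sharding arithmetic (function names are illustrative, not PyTorch's actual FSDP API):

```python
def shard_params(flat_params, world_size):
    # FSDP-style flat-parameter sharding: pad so the list divides evenly,
    # then give each rank one contiguous shard.
    shard_size = -(-len(flat_params) // world_size)  # ceiling division
    padded = flat_params + [0.0] * (shard_size * world_size - len(flat_params))
    return [padded[r * shard_size:(r + 1) * shard_size] for r in range(world_size)]

def all_gather(shards, n_params):
    # Before a layer runs, every rank reassembles the full parameter set.
    flat = [x for shard in shards for x in shard]
    return flat[:n_params]  # drop the padding

params = [0.1, 0.2, 0.3, 0.4, 0.5]
shards = shard_params(params, world_size=2)
restored = all_gather(shards, len(params))
```

In real training, gradients and optimizer state are sharded the same way, which is what lets models at the 70B-parameter scale fit across a multi-GPU cluster.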

## Evaluation Results

Med42-v2 improves performance on every clinical benchmark compared to our previous version, including MedQA, MedMCQA, USMLE, the MMLU clinical topics, and the MMLU-Pro clinical subset. For all evaluations reported so far, we use [EleutherAI's evaluation harness library](https://github.com/EleutherAI/lm-evaluation-harness) and report zero-shot accuracies (unless stated otherwise). We integrated chat templates into the harness and computed the likelihood of the full answer instead of only the tokens "a.", "b.", "c.", or "d.".
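As a toy illustration of why full-answer scoring can differ from letter-only scoring (the log-prob values below are made up; a real harness would obtain them from the model):

```python
def score_option(token_logprobs):
    # Full-answer likelihood: sum the log-probs of every token in the answer text.
    return sum(token_logprobs)

# Hypothetical per-token log-probs the model assigns to two candidate answers;
# the first entry of each list is the log-prob of the letter token ("a."/"b.").
options = {
    "a. polyuria": [-0.9, -0.4],
    "b. rash": [-0.7, -2.5],
}

letter_only = max(options, key=lambda o: options[o][0])
full_answer = max(options, key=lambda o: score_option(options[o]))
# The two criteria disagree here: the letter token alone favours "b. rash",
# while the full answer string favours "a. polyuria".
```

Scoring the full answer string rewards the option whose entire text is plausible to the model, not just its label token.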

|Model|MMLU Pro|MMLU|MedMCQA|MedQA|USMLE|
|---:|:---:|:---:|:---:|:---:|:---:|
|Med42v2-70B|64.97|88.16|73.82|80.68|84.61|
|Med42v2-8B|55.15|77.11|61.82|63.71|68.93|
|OpenBioLLM|64.24|90.40|73.18|76.90|79.01|
|GPT-4.0<sup>†</sup>|-|87.00|69.50|78.90|84.05|
|MedGemini*|-|-|-|84.00|-|
|Med-PaLM-2 (5-shot)*|-|87.77|71.30|79.70|-|
|Med42|-|76.72|60.90|61.50|71.85|
|ClinicalCamel-70B|-|69.75|47.00|53.40|54.30|
|GPT-3.5<sup>†</sup>|-|66.63|50.10|50.80|53.00|

**For MedGemini, results are reported for MedQA without self-training and without search. We note that zero-shot performance is not reported for Med-PaLM 2. Further details can be found at [https://github.com/m42health/med42](https://github.com/m42health/med42).*

<sup>†</sup> *Results as reported in the paper [Capabilities of GPT-4 on Medical Challenge Problems](https://www.microsoft.com/en-us/research/uploads/prod/2023/03/GPT-4_medical_benchmarks.pdf).*

### Key performance metrics

- Med42-v2-70B outperforms GPT-4.0 on all clinically relevant benchmarks.
- Med42-v2-70B achieves a MedQA zero-shot accuracy of 80.68, surpassing the prior state of the art among all openly available medical LLMs.
- Med42-v2-70B attains an 84.61% score on the USMLE (self-assessment and sample exam combined), the highest score achieved so far.

## Limitations & Safe Use

- The Med42-v2 suite of models is not ready for real clinical use. Extensive human evaluation, which is required to ensure safety, is ongoing.
- Potential for generating incorrect or harmful information.
- Risk of perpetuating biases in the training data.

Use this suite of models responsibly! Do not rely on them for medical use without rigorous safety testing.

## Accessing Med42 and Reporting Issues

Please report any software "bug" or other problems through one of the following means:

- Reporting issues with the model: [https://github.com/m42health/med42](https://github.com/m42health/med42)
- Reporting risky content generated by the model, bugs, and/or any security concerns: [https://forms.office.com/r/fPY4Ksecgf](https://forms.office.com/r/fPY4Ksecgf)
- M42's privacy policy, available at [https://m42.ae/privacy-policy/](https://m42.ae/privacy-policy/)
- Reporting violations of the Acceptable Use Policy or unlicensed uses of Med42: <[email protected]>

## Citation

```
@article{christophe2023med42,
  title={Med42v2},
  author={Christophe, Cl{\'e}ment and Hayat, Nasir and Kanithi, Praveen and Al-Mahrooqi, Ahmed and Munjal, Prateek and Pimentel, Marco and Raha, Tathagata and Rajan, Ronnie and Khan, Shadab},
  journal={M42},
  year={2023}
}
```