mav23 committed on
Commit
aed39a3
1 Parent(s): b806104

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +251 -0
  3. llama3-aloe-8b-alpha.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+llama3-aloe-8b-alpha.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,251 @@
---
license: cc-by-nc-4.0
datasets:
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- jondurbin/airoboros-3.2
- hkust-nlp/deita-10k-v0
- LDJnr/Capybara
- HPAI-BSC/CareQA
- GBaker/MedQA-USMLE-4-options
- lukaemon/mmlu
- bigbio/pubmed_qa
- openlifescienceai/medmcqa
- bigbio/med_qa
- HPAI-BSC/better-safe-than-sorry
- HPAI-BSC/pubmedqa-cot
- HPAI-BSC/medmcqa-cot
- HPAI-BSC/medqa-cot
language:
- en
library_name: transformers
tags:
- biology
- medical
pipeline_tag: question-answering
---

THE LATEST ITERATION OF THE ALOE FAMILY IS AVAILABLE NOW: the [ALOE BETA 8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B) and [ALOE BETA 70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B) versions. These include:
* Better overall performance
* More thorough alignment and safety
* A license compatible with more uses

# Aloe: A New Family of Healthcare LLMs

Aloe is a new family of healthcare LLMs that is highly competitive with all previous open models of its range, reaching state-of-the-art results at its size through model merging and advanced prompting strategies. Aloe scores high on metrics measuring ethics and factuality, thanks to a combined red-teaming and alignment effort. Complete training details, model merging configurations, and all training data (including synthetically generated data) will be shared, as will the prompting repository used in this work to produce state-of-the-art results during inference. Aloe comes with a healthcare-specific risk assessment to support the safe use and deployment of such systems.

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/xlssx5_3_kLQlJlmE-aya.png" width="95%">

## Model Details

### Model Description

- **Developed by:** [HPAI](https://hpai.bsc.es/)
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** English (mainly)
- **License:** This model is based on Meta Llama 3 8B and is governed by the [Meta Llama 3 License](https://llama.meta.com/llama3/license/). All our modifications are available under a [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

### Model Sources

- **Repository:** https://github.com/HPAI-BSC/prompt_engine (more coming soon)
- **Paper:** https://arxiv.org/abs/2405.01886 (more coming soon)

## Model Performance

Aloe has been tested on the most popular healthcare QA datasets, with and without the Medprompt inference technique. Results show competitive performance, even against bigger models.

<img src="https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/rQ4z-qXzKN44oAcFDbHi2.png" width="95%">

Results using advanced prompting methods (aka Medprompt) are achieved through a [repo](https://github.com/HPAI-BSC/prompt_engine) made public with this work.

## Uses

### Direct Use

We encourage the use of Aloe for research purposes, as a stepping stone to build better foundational models for healthcare.

### Out-of-Scope Use

These models are not to be used for clinical practice, medical diagnosis, or any other form of direct or indirect healthcare advice. Models are prone to error and can produce toxic content. The use of Aloe models for activities harmful to individuals, such as spam, fraud, or impersonation, is prohibited.

## Bias, Risks, and Limitations

We consider three risk cases:

- Healthcare professional impersonation, a fraudulent behaviour which currently generates billions of dollars in [profit](https://www.justice.gov/opa/pr/justice-department-charges-dozens-12-billion-health-care-fraud). A model such as Aloe could be used to increase the efficacy of such deceptive activities, making them more widespread. The main preventive actions are public literacy on the unreliability of digitised information and the importance of medical registration, and legislation enforcing AI-generated content disclaimers.
- Medical decision-making without professional supervision. While this is already an issue in modern societies (e.g., self-medication), a model such as Aloe, capable of producing high-quality conversational data, can facilitate self-delusion, particularly in the presence of sycophancy. By producing tailored responses, it can also be used to generate actionable answers. Public literacy on the dangers of self-diagnosis is one of the main defences, together with the introduction of disclaimers and warnings on the models' outputs.
- Access to information on dangerous substances or procedures. While such sensitive content can already be found in different sources (e.g., libraries, the internet, the dark web), LLMs can centralise access to it, making it nearly impossible to control the flow of such information. Model alignment can help in that regard, but its effects so far remain insufficient, as jailbreaking methods still overcome it.

The table below shows the performance of Aloe on several AI safety tasks:

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/T6Jblpf1kmTkM04K716rM.png" width="95%">

### Recommendations

We avoid the use of all personal data in our training. Model safety cannot be guaranteed, and Aloe can produce toxic content under the appropriate prompts. For these reasons, minors should not be left alone to interact with Aloe without supervision.

## How to Get Started with the Model

Use the code below to get started with the model. You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.

#### Transformers pipeline

```python
import transformers
import torch

model_id = "HPAI-BSC/Llama3-Aloe-8B-Alpha"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello."},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the tokenizer's EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Print only the generated continuation, skipping the prompt.
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "HPAI-BSC/Llama3-Aloe-8B-Alpha"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the tokenizer's EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Training Details

Supervised fine-tuning on top of Llama 3 8B using medical and general domain datasets, model merging via the DARE-TIES process, and a two-stage DPO process for human preference alignment. More details coming soon.

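
The merging step described above could be expressed as a mergekit configuration along these lines. This is a sketch only: the actual merge recipe has not been released, and the model list, densities, and weights below are placeholders.

```yaml
# Hypothetical mergekit DARE-TIES configuration; the merged models and
# their parameters are placeholders, not the authors' released recipe.
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.55
      weight: 0.5
  - model: my-org/llama3-8b-medical-sft   # placeholder fine-tuned checkpoint
    parameters:
      density: 0.55
      weight: 0.5
dtype: bfloat16
```

DARE-TIES randomly drops a fraction of each fine-tune's delta weights (controlled by `density`) and rescales the rest before resolving sign conflicts, which tends to preserve each source model's strengths in the merge.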
### Training Data

- Medical domain datasets, including synthetic data generated using Mixtral-8x7B and Genstruct
  - HPAI-BSC/pubmedqa-cot
  - HPAI-BSC/medqa-cot
  - HPAI-BSC/medmcqa-cot
- LDJnr/Capybara
- hkust-nlp/deita-10k-v0
- jondurbin/airoboros-3.2
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- Custom preference data with adversarial prompts generated from Anthropic Harmless, Chen et al., and original prompts

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [CareQA](https://huggingface.co/datasets/HPAI-BSC/CareQA)

#### Metrics

- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.

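
For multiple-choice QA, accuracy is plain exact match over the selected options; a minimal sketch (the option letters below are hypothetical examples, not benchmark data):

```python
# Minimal sketch of the accuracy metric for multiple-choice QA.
# Option letters below are hypothetical examples, not benchmark data.
def accuracy(predictions, references):
    """Fraction of questions where the predicted option matches the gold option."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have equal length")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

print(accuracy(["A", "C", "B", "D"], ["A", "B", "B", "D"]))  # 0.75
```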
### Results

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/STlPSggXr9P9JeWAvmAsi.png" width="90%">

#### Summary

To compare Aloe with the most competitive open models (both general purpose and healthcare-specific), we use popular healthcare datasets (PubMedQA, MedMCQA, MedQA, and MMLU restricted to six medical tasks), together with the new and highly reliable CareQA. For reference, we produce the standard MultiMedQA score by computing the weighted average accuracy across all datasets except CareQA; we also calculate the arithmetic mean across all datasets. The Medical MMLU score is obtained by averaging the six medical subtasks: Anatomy, Clinical Knowledge, College Biology, College Medicine, Medical Genetics, and Professional Medicine.
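
The two aggregations described above can be sketched as follows; the per-dataset accuracies and question counts are hypothetical placeholders, not the reported results:

```python
# Sketch of the two aggregate scores described above; the per-dataset
# accuracies and question counts are hypothetical placeholders.
scores = {  # dataset -> (accuracy, number of questions)
    "MedQA":        (0.60, 1273),
    "MedMCQA":      (0.58, 4183),
    "PubMedQA":     (0.75, 500),
    "MMLU-Medical": (0.70, 1089),
    "CareQA":       (0.65, 5621),
}

# MultiMedQA: weighted average accuracy over all datasets except CareQA,
# weighting each dataset by its number of questions.
mm = {name: v for name, v in scores.items() if name != "CareQA"}
multimedqa = sum(a * n for a, n in mm.values()) / sum(n for _, n in mm.values())

# Arithmetic mean across all datasets (CareQA included).
mean_acc = sum(a for a, _ in scores.values()) / len(scores)

print(f"MultiMedQA: {multimedqa:.4f}  mean: {mean_acc:.4f}")
```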

Benchmark results indicate that the training conducted on Aloe has boosted its performance above Llama3-8B-Instruct. Llama3-Aloe-8B-Alpha outperforms larger models such as Meditron 70B, and approaches larger base models such as Yi-34B. For the former, this gain is consistent even when Meditron uses SC-CoT in its best-reported variant. All these results make Llama3-Aloe-8B-Alpha the best healthcare LLM of its size.

With the help of prompting techniques, the performance of Llama3-Aloe-8B-Alpha improves significantly. Medprompt in particular provides a 7% increase in reported accuracy, after which Llama3-Aloe-8B-Alpha only lags behind the ten-times-larger Llama-3-70B-Instruct. This improvement is mostly consistent across medical fields. Llama3-Aloe-8B-Alpha with Medprompt beats Meditron 70B with its self-reported 20-shot SC-CoT on MMLU-Medical, and is slightly worse on the other benchmarks.

## Environmental Impact

- **Hardware Type:** 4x H100
- **Hours used:** 7,000
- **Hardware Provider:** Barcelona Supercomputing Center
- **Compute Region:** Spain
- **Carbon Emitted:** 439.25 kg

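
Interpreting the 7,000 hours as total GPU-hours, the reported figure is consistent with a simple GPU-hours × TDP × grid-carbon-intensity estimate. The 700 W board power and the carbon intensity below are illustrative assumptions, not values reported by the authors:

```python
# Back-of-the-envelope check of the carbon figure above, interpreting the
# 7,000 hours as total GPU-hours. The 700 W TDP and the grid carbon
# intensity are illustrative assumptions, not reported values.
gpu_hours = 7000            # reported total GPU-hours
tdp_kw = 0.7                # H100 SXM board power, ~700 W (assumption)
grid_kg_per_kwh = 0.0896    # assumed carbon intensity of the compute region

energy_kwh = gpu_hours * tdp_kw              # total energy drawn by the GPUs
emissions_kg = energy_kwh * grid_kg_per_kwh  # estimated CO2 emitted
print(f"{emissions_kg:.2f} kg CO2")
```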
## Model Card Authors

[Ashwin Kumar Gururajan](https://huggingface.co/G-AshwinKumar)

## Model Card Contact

## Citations

If you use this repository in a published work, please cite the following paper as source:

```
@misc{gururajan2024aloe,
      title={Aloe: A Family of Fine-tuned Open Healthcare LLMs},
      author={Ashwin Kumar Gururajan and Enrique Lopez-Cuena and Jordi Bayarri-Planas and Adrian Tormos and Daniel Hinjos and Pablo Bernabeu-Perez and Anna Arias-Duart and Pablo Agustin Martin-Torres and Lucia Urcelay-Ganzabal and Marta Gonzalez-Mallo and Sergio Alvarez-Napagao and Eduard Ayguadé-Parra and Ulises Cortés and Dario Garcia-Gasulla},
      year={2024},
      eprint={2405.01886},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
llama3-aloe-8b-alpha.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:284782d50378328437c26e6a4634a81e71e166940ee398e37ab5c58242fcf3dc
+size 4661212288
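
The quantized GGUF added in this commit can also be run outside the Transformers stack, for example with llama-cpp-python. This is a sketch only: it assumes the `.gguf` file has already been downloaded locally, and the context size, `chat_format`, and sampling parameters are illustrative choices, not settings recommended by the authors.

```python
# Sketch: chat inference with the Q4_0 GGUF via llama-cpp-python.
# Assumes the .gguf file is present locally; parameters are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="llama3-aloe-8b-alpha.Q4_0.gguf",
    n_ctx=8192,             # context window (assumption)
    chat_format="llama-3",  # apply the Llama 3 chat template
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an expert medical assistant named Aloe."},
        {"role": "user", "content": "Hello."},
    ],
    max_tokens=256,
    temperature=0.6,
    top_p=0.9,
)
print(out["choices"][0]["message"]["content"])
```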