G-AshwinKumar committed on
Commit 82d89e6
1 Parent(s): dd4ea0a

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -34,9 +34,9 @@ Aloe: A Family of Fine-tuned Open Healthcare LLMs
 
 ---
 
-Llama3.1-Aloe-8B-Beta is an **open healthcare LLM** (released with a permissive CC-BY license) achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in two model sizes: [8B](https://huggingface.co/HPAI-BSC/Llama31-Aloe-Beta-8B) and [70B](https://huggingface.co/HPAI-BSC/Llama31-Aloe-Beta-70B). Both models are trained using the same recipe. All necessary resources and details are made available below.
+Llama3.1-Aloe-8B-Beta is an **open healthcare LLM** (released with a permissive CC-BY license) achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in two model sizes: [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B) and [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B). Both models are trained using the same recipe.
 
-Aloe is trained in 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 8B version gets close to the performance of closed models like MedPalm-2, GPT4 and Medprompt. With the same RAG system, Aloe-Beta-70B outperforms those private alternatives, producing state-of-the-art results.
+Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 8B version gets close to the performance of closed models like MedPalm-2 and GPT4. With the same RAG system, Aloe-Beta-70B outperforms those private alternatives, producing state-of-the-art results.
 
 # Aloe-8B-Beta
 
@@ -357,7 +357,7 @@ With the help of prompting techniques the performance of Llama3-Aloe-8B-Beta is
 
 
 ## Authors
-Aloe Beta has been developed by the [High Performance Artificial Intelligence](https://hpai.bsc.es/) research group from the [Barcelona Supercomputing Center - BSC](https://www.bsc.es/). Main authors are [Jordi Bayarri Planas](https://huggingface.co/JordiBayarri), Ashwin Kumar Gururajan and [Dario Garcia-Gasulla](https://huggingface.co/dariog). Red teaming efforts led by Adrian Tormos.
+Aloe Beta has been developed by the [High Performance Artificial Intelligence](https://hpai.bsc.es/) research group from the [Barcelona Supercomputing Center - BSC](https://www.bsc.es/). Main authors are [Jordi Bayarri Planas](https://huggingface.co/JordiBayarri), [Ashwin Kumar Gururajan](https://huggingface.co/G-AshwinKumar) and [Dario Garcia-Gasulla](https://huggingface.co/dariog). Red teaming efforts led by Adrian Tormos.
 
 mailto:[email protected]
 
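
For readers landing on this commit, a minimal sketch of loading the 8B checkpoint referenced in the updated README with the Hugging Face `transformers` library. The model id is taken from the links above; the example question, chat-template call, and generation settings are illustrative assumptions, not part of this commit.

```python
# Minimal sketch: load HPAI-BSC/Llama3.1-Aloe-Beta-8B (model id from the
# README links above) and ask one medical question. Sampling settings and
# the prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HPAI-BSC/Llama3.1-Aloe-Beta-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

messages = [{"role": "user", "content": "What are common symptoms of anemia?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```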