JordiBayarri committed on
Commit 3f2f0bb (verified)
Parent: 884bc48

Update README.md

Files changed (1)
README.md +4 -3
README.md CHANGED
@@ -80,9 +80,10 @@ Aloe Beta has been tested on the most popular healthcare QA datasets, with and w
 
 The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated the model in many different medical tasks:
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/FyHZXoXCbc7AzXeCwqS9_.png)
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/uS3qddvQ5iwbI0WZGVFDF.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/0cuipdgiVfa1goX3i-ZTB.png)
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/qm-g6qBoMeX6i_zH-aNrm.png)
 
 We also compared the performance of the model in the general domain, using the OpenLLM Leaderboard benchmark. Aloe-Beta gets competitive results with the current SOTA general models in the most used general benchmarks and outperforms the medical models:
 
@@ -251,7 +252,7 @@ The training set consists of around 1.8B tokens, having 3 different types of dat
   - [HPAI-BSC/MMLU-medical-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/MMLU-medical-cot-llama31)
   - [HPAI-BSC/Polymed-QA](https://huggingface.co/datasets/HPAI-BSC/Polymed-QA)
   - Genstruct data (coming soon)
- - General data. It includes maths, STEM, code, function calling, and instruction of very long instructions.
+ - General data. It includes maths, STEM, code, function calling, and instructions with a very long context.
   - [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
 
 #### Training parameters
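
For anyone wanting to inspect the datasets referenced in this change, here is a minimal sketch (not part of the commit) using the Hugging Face `datasets` library. The repo id is taken from the link in the diff above; the split name and record layout are assumptions that may differ, so check the dataset card first.

```python
# Minimal sketch, not part of this commit: load one of the datasets linked
# in the diff with the Hugging Face `datasets` library.
# Assumptions: a "train" split exists and the default config is loadable.
from datasets import load_dataset

ds = load_dataset("HPAI-BSC/Aloe-Beta-General-Collection", split="train")
print(ds)      # row count and column names
print(ds[0])   # first record, e.g. an instruction/response pair
```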