The training utilized the BiMed1.3M dataset, focusing on bilingual medical interactions in both English and Arabic, with a substantial corpus of over 632 million healthcare-specialized tokens.

The model was fine-tuned with QLoRA, a quantized low-rank adaptation technique that efficiently adapts the model to specific tasks while keeping computational demands manageable.
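The low-rank update at the heart of (Q)LoRA can be sketched in plain Python. Instead of updating a full `d x d` weight matrix `W`, training only touches two small matrices `A` (`r x d`) and `B` (`d x r`) with `r << d`, and the effective weight is `W + (alpha / r) * (B @ A)`. The dimensions, rank, and scaling below are illustrative toys, not BiMediX's actual configuration, and real QLoRA additionally quantizes the frozen base weights to 4-bit, which this sketch omits:

```python
# Illustrative sketch of the low-rank adaptation idea behind (Q)LoRA.
# W stays frozen; only the small factors A and B are trained.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the adapted weight matrix."""
    delta = matmul(B, A)          # d x d low-rank update, built from r-rank factors
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d = 2, rank r = 1, so only 2*d*r = 4 values are trained
# instead of the d*d = 4 full weights (the savings grow with d).
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]   # r x d
B = [[3.0],        # d x r
     [4.0]]
W_eff = lora_effective_weight(W, A, B, alpha=1.0, r=1)
# W_eff == [[4.0, 6.0], [4.0, 9.0]]
```

For realistic `d` (thousands) and small `r` (e.g. 8-64), the trained parameter count `2*d*r` is a tiny fraction of `d*d`, which is what keeps the fine-tuning budget manageable.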
| Model Name | Download |
|--------------|----------|
| BiMediX-Eng | [HuggingFace Link](https://huggingface.co/BiMediX/BiMediX-Eng) |
| BiMediX-Ara | [HuggingFace Link](https://huggingface.co/BiMediX/BiMediX-Ara) |
| BiMediX-Bi | [HuggingFace Link](https://huggingface.co/BiMediX/BiMediX-Bi) |

## Dataset

(Details about the BiMed1.3M dataset, including composition and access.)