# Introducing **JiviMed-8B_v1**: The Cutting-Edge Biomedical Language Model
JiviMed-8B stands as a pinnacle in language modeling tailored specifically for the biomedical sector. Developed by Jivi AI, this model incorporates the latest advancements to deliver unparalleled performance across a wide range of biomedical applications.
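A minimal usage sketch with the Hugging Face `transformers` library follows. The repository id `jivi-ai/JiviMed-8B_v1`, the system prompt, and the use of a chat template are assumptions for illustration, not details confirmed by this card:

```python
# Minimal usage sketch. The repository id below is an ASSUMPTION
# (not confirmed by this card); substitute the actual Hugging Face id.

def build_messages(question: str) -> list:
    """Wrap a biomedical question in a chat-style message list."""
    return [
        {"role": "system", "content": "You are a helpful biomedical assistant."},
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    # Heavy imports kept inside the guard so the helper above stays dependency-free.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "jivi-ai/JiviMed-8B_v1"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    prompt = tokenizer.apply_chat_template(
        build_messages("What is catastrophic forgetting in model fine-tuning?"),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```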
*Tailored for Medicine*: JiviMed-8B is meticulously designed to cater to the specialized language and knowledge requirements of the medical and life sciences industries. It has been fine-tuned using an extensive collection of high-quality biomedical data, enhancing its ability to accurately comprehend and generate domain-specific text.
*Unmatched Performance*: With 8 billion parameters, JiviMed-8B outperforms other open-source biomedical language models of similar size. It demonstrates superior results over larger models, both proprietary and open-source, such as GPT-3.5, Meditron-70B, and Gemini 1.0, in various biomedical benchmarks.
*Enhanced Training Methodologies*: JiviMed-8B builds upon the robust framework of the Meta-Llama-3-8B models, integrating a specially curated, diverse medical dataset along with an ORPO fine-tuning strategy. Key elements of our training process include:
1. Intensive Data Preparation: More than 100,000 data points have been meticulously curated to ensure the model is well-versed in the nuances of biomedical language.
2. Hyperparameter Tuning: Hyperparameters were carefully optimized to enhance learning efficiency while avoiding catastrophic forgetting, maintaining robust performance across tasks.
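The ORPO fine-tuning step described above can be sketched with the TRL library's `ORPOTrainer`. The dataset contents, hyperparameter values, and output directory below are illustrative placeholders, not the authors' actual training setup:

```python
# Hedged sketch of ORPO preference fine-tuning with TRL.
# All data and hyperparameters here are ILLUSTRATIVE, not the real recipe.

def to_preference_record(prompt: str, chosen: str, rejected: str) -> dict:
    """Shape one example into the prompt/chosen/rejected record ORPO expects."""
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

if __name__ == "__main__":
    # Heavy imports kept inside the guard so the helper above stays dependency-free.
    from datasets import Dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import ORPOConfig, ORPOTrainer

    base = "meta-llama/Meta-Llama-3-8B"  # base model named in this card
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Tiny stand-in for the curated medical preference data.
    train = Dataset.from_list([
        to_preference_record(
            "What does hypertension mean?",
            "Hypertension is persistently elevated arterial blood pressure.",
            "It means feeling tense.",
        ),
    ])

    args = ORPOConfig(
        output_dir="orpo-out",
        beta=0.1,  # odds-ratio loss weight; value is a placeholder
        per_device_train_batch_size=1,
        num_train_epochs=1,
    )
    trainer = ORPOTrainer(
        model=model,
        args=args,
        train_dataset=train,
        processing_class=tokenizer,  # `tokenizer=` in older TRL versions
    )
    trainer.train()
```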