# 🩺 Medical Diagnosis AI Model - Powered by Mistral-7B & LoRA

🔹 **Model Overview**
- Base Model: Mistral-7B (7.7 billion parameters)
- Fine-Tuning Method: LoRA (Low-Rank Adaptation)
- Quantization: bnb_4bit (reduces memory footprint while retaining performance)

🔹 **Parameter Details**
- Original Mistral-7B Parameters: 7.7 billion
- LoRA Fine-Tuned Parameters: 4.48% of total model parameters (~340 million)
- Final Merged Model Size (bnb_4bit quantized): ~4.5 GB
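As a quick illustration of the setup described above, here is a minimal loading sketch using Hugging Face `transformers` with a `BitsAndBytesConfig` for bnb_4bit quantization. The repo id, the `nf4` quant type, and the compute dtype are assumptions for the example, not details stated in this card.

```python
# A minimal loading sketch, assuming the merged weights are published under a
# hypothetical repo id and that transformers, bitsandbytes, and accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "your-username/medical-diagnosis-mistral-7b"  # hypothetical repo id

# bnb_4bit quantization config matching the quantization described above
# (quant type and compute dtype are assumed, not specified in this card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

# Simple inference: ask a diagnostic question and decode the generated answer
prompt = (
    "A 45-year-old patient presents with chest pain and shortness of breath. "
    "What are possible diagnoses?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```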
This model can serve as the basis for an AI agent for healthcare. If you need it to produce a JSON function/tool-calling format, you can fine-tune it again on a medical function-calling dataset, as sketched below.
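The following is a hedged sketch of how such a second LoRA fine-tuning pass could be set up with `peft`. The dataset id, LoRA hyperparameters, and target modules are placeholders chosen for illustration, not values prescribed by this card.

```python
# Sketch of a second LoRA fine-tuning pass for JSON function/tool calling.
# Dataset id and hyperparameters below are placeholder assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-username/medical-diagnosis-mistral-7b"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Attach fresh LoRA adapters; target modules follow the usual Mistral attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Load a medical function-calling dataset (placeholder id), then format each
# example as prompt + expected JSON tool call and train with your preferred
# trainer (e.g. transformers Trainer or trl's SFTTrainer).
dataset = load_dataset("some-org/medical-function-calling", split="train")
```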