---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
license: mit
datasets:
- syubraj/medical-chat-phi-3.5-instruct-1k
language:
- en
pipeline_tag: text-generation
github-repo: MedicalChatbot-Phi3.5-mini
---
# Phi-3.5 Mini Instruct Medical Chat (LoRA Adapter)
The **MedicalChat-Phi-3.5-mini-instruct** fine-tuned model simulates doctor-patient conversations, offering medical consultations and suggestions based on patient queries. Its accuracy may be limited in real-world scenarios, because the training dataset was relatively small.
## Model Overview
- **Developed by:** [syubraj](https://huggingface.co/syubraj)
- **Model Type:** Causal Language Model (CausalLM)
- **Language:** English
- **License:** MIT License
- **Fine-tuned From:** [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
---
# GitHub Repo
[yubraaj11/MedicalChatbot-Phi3.5-mini](https://github.com/yubraaj11/MedicalChatbot-Phi3.5-mini)
---
## How to Get Started
### Install Required Packages
```bash
pip install transformers peft accelerate torch bitsandbytes
```
### Load and Use the Model
Inference can be memory-intensive, so the example below loads the model and runs generation on the CPU by default.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

# Clear any leftover CUDA memory before loading the model
torch.cuda.empty_cache()

device = "cuda" if torch.cuda.is_available() else "cpu"  # not used below; the pipeline defaults to CPU

model_name = "syubraj/MedicalChat-Phi-3.5-mini-instruct"

try:
    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    print("Model and tokenizer loaded successfully.")
except Exception as e:
    raise ValueError(f"Error loading model and tokenizer: {e}")


def generate_response(user_query: str, system_message: str = None, max_length: int = 1024) -> str:
    """
    Generate a response for the given user query.

    :param user_query: The user's input message.
    :param system_message: Custom system instruction (optional, defaults to a medical assistant prompt).
    :param max_length: Maximum number of new tokens to generate.
    :return: Generated assistant response.
    """
    if not user_query.strip():
        return "Error: User query cannot be empty."

    if system_message is None:
        system_message = ("You are a trusted AI-powered medical assistant. "
                          "Analyze patient queries carefully and provide accurate, professional, and empathetic responses. "
                          "Prioritize patient safety, adhere to medical best practices, and recommend consulting a healthcare provider when necessary.")

    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_query},
    ]

    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

    generation_args = {
        "max_new_tokens": max_length,
        "return_full_text": False,
        "temperature": 0.0,
        "do_sample": False,  # greedy decoding for deterministic answers
    }

    output = pipe(messages, **generation_args)
    return output[0]["generated_text"]


if __name__ == "__main__":
    user_input = "Hi Doctor, I have a headache."
    response = generate_response(user_input)
    print("Assistant Response:", response)
```
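Because this repository is a LoRA adapter for `microsoft/Phi-3.5-mini-instruct` (see `library_name: peft` and `base_model` above), the adapter can also be attached to the base model explicitly with PEFT. A minimal sketch, assuming you have enough memory for the full-precision base model (`bitsandbytes` could be used for a quantized load instead):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "microsoft/Phi-3.5-mini-instruct"
adapter_name = "syubraj/MedicalChat-Phi-3.5-mini-instruct"

# Load the base model first, then apply the LoRA adapter weights on top of it.
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_name)

# The adapter does not change the tokenizer, so the base tokenizer works as well.
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Optionally merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```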
---
### Example Output
Assistant Response: Hello! I understand that you're looking to potentially increase your height, and I'll do my best to provide you with helpful information.
Firstly, it's important to clarify that after puberty, the growth plates in our bones, known as epiphyseal plates, have typically fused, meaning our height generally stabilizes. For most adults, especially those who have reached their late teens or early twenties, further height increase is unlikely.
However, there are a few factors that can influence your height:
1. **Genetics**: Your genes play a significant role in determining your height. If your parents are tall, you might be taller too.
2. **Nutrition**: Proper nutrition is essential for growth. Ensure you're getting a balanced diet with enough protein, calcium, and vitamins.
3. **Posture**: Sometimes, poor posture can make you appear shorter. Practicing good posture might help you look taller.
4. **Sleep**: Adequate sleep is crucial for growth hormones to work effectively.
5. **Exercise**: Regular exercise can help maintain a healthy posture and spine alignment, which might help you appear taller.
Here are some tips that might help you look taller:
- **Stand Straight**: Keep your shoulders back and your back straight. This can make you appear taller.
- **Practice Good Posture**: Be mindful of your posture throughout the day.
- **Exercise**: Engage in exercises that strengthen your core, back, and leg muscles.
- **Shoe Lifts**: Some people find that wearing shoe lifts can make them appear taller. However, this is more of a visual effect and won't actually increase your height.
- **Physical Therapy**: If you have concerns about your posture or spinal alignment, a physical therapist can provide exercises and treatments to help.
Remember, while these tips might help you look taller, they won't change your actual height. If you're concerned about your height for medical reasons, such as a growth hormone deficiency, it's best to consult with a healthcare provider.
I hope this information is helpful! If you have any more questions or need further assistance, feel free to ask.
---
## Training Details
| Step | Training Loss | Validation Loss |
|------|--------------|----------------|
| 50 | 15.751700 | 1.201298 |
| 100 | 4.741700 | 1.170077 |
| 150 | 4.734700 | 1.158106 |
| 200 | 4.521200 | 1.146289 |
| 250 | 4.638000 | 1.135715 |
| 300 | 3.985500 | 1.138858 |
| 350 | 4.079600 | 1.135848 |
| 400 | 4.299300 | 1.136855 |
| 450 | 4.298400 | 1.127821 |
| 500 | 3.974000 | 1.125197 |
| 550 | 3.747800 | 1.158925 |
| 600 | 3.638500 | 1.157699 |
| 650 | 3.661100 | 1.162100 |
| 700 | 3.460500 | 1.160023 |
| 750 | 3.640400 | 1.159106 |
---
## Training Data
A total of **1,000 entries** were used for training, while **200 entries** were reserved for testing.
- **Dataset:** [syubraj/medical-chat-phi-3.5-instruct-1k](https://huggingface.co/datasets/syubraj/medical-chat-phi-3.5-instruct-1k)
- **Preprocessed Dataset:** [syubraj/ruslanmv_medicalChat_phi3.5_instruct](https://huggingface.co/datasets/syubraj/ruslanmv_medicalChat_phi3.5_instruct)
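If you want to inspect or reuse the training data, it can be loaded with the `datasets` library. A minimal sketch (the `train` split name is an assumption; check the dataset card for the exact splits):
```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub.
# The "train" split name is an assumption; check the dataset card for the actual splits.
dataset = load_dataset("syubraj/medical-chat-phi-3.5-instruct-1k", split="train")

print(dataset)     # dataset structure and column names
print(dataset[0])  # first example
```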
---
## Training Hyperparameters
| Parameter | Value |
|--------------------------------|---------------------------|
| `output_dir` | `./results` |
| `per_device_train_batch_size` | `1` |
| `per_device_eval_batch_size` | `1` |
| `gradient_accumulation_steps` | `4` |
| `logging_steps` | `50` |
| `num_train_epochs` | `3` |
| `save_steps` | `50` |
| `save_total_limit` | `2` |
| `eval_strategy` | `"steps"` |
| `eval_steps` | `50` |
| `learning_rate` | `2e-4` |
| `weight_decay` | `0.01` |
| `lr_scheduler_type` | `"cosine"` |
| `warmup_steps` | `25` |
| `fp16` | `True` (Mixed Precision) |
| `push_to_hub` | `False` |
| `report_to` | `"wandb"` (WandB Logging) |
| `run_name` | `"phi3.5-finetune-nf4"` |
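For reference, a sketch of how the table above maps onto `transformers.TrainingArguments`. Argument names follow recent `transformers` releases (older versions use `evaluation_strategy` instead of `eval_strategy`), and this is illustrative rather than the original training script:
```python
from transformers import TrainingArguments

# Mirror of the hyperparameter table above; illustrative, not the exact training script.
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    logging_steps=50,
    num_train_epochs=3,
    save_steps=50,
    save_total_limit=2,
    eval_strategy="steps",
    eval_steps=50,
    learning_rate=2e-4,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    warmup_steps=25,
    fp16=True,
    push_to_hub=False,
    report_to="wandb",
    run_name="phi3.5-finetune-nf4",
)
```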
---
## Hardware Used
**P100 (Kaggle free GPU)**

**Framework Versions:**
- PEFT 0.14.0
---
**Thank you for using this model! Feel free to reach out with any suggestions for improvement.**