---
language:
- en
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- biology
- medical
- text-generation-inference
---
# LLaMA-2-7B Chat - AI Medical Chatbot

## Model Overview
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the [AI Medical Chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot) dataset, a collection of medical question-answer pairs. It is intended to generate respectful, informative responses to common medical questions.

Fine-tuning used LoRA (Low-Rank Adaptation) on a quantized base model via [PEFT](https://github.com/huggingface/peft), so only a small set of adapter weights is updated. This keeps training and inference lightweight while maintaining high-quality, contextually relevant outputs.
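
For reference, a quantized-LoRA setup of this kind can be assembled with `transformers` and `peft` roughly as sketched below. The exact quantization settings, LoRA rank, alpha, dropout, and target modules used for this checkpoint are not published here, so the specific values in the sketch are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_id = "NousResearch/Llama-2-7b-chat-hf"

# 4-bit quantization so the 7B base model fits on a single GPU (settings are assumptions)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter; rank, alpha, dropout, and target modules below are illustrative
# assumptions, not the values used for this checkpoint
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```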

### Model Details
- **Model Type**: LLaMA-2 7B (Chat)
- **Base Model**: [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf)
- **Fine-tuned on**: [AI Medical Chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot)
- **Parameter Count**: 7 Billion
- **Quantization**: Applied for efficient inference
- **Training Methodology**: LoRA + PEFT
- **Task**: Medical Question Answering (QA)

## Model Purpose
This model was fine-tuned to assist users in answering medical questions, particularly common healthcare concerns such as symptoms, treatments, and remedies. It is **not intended** to replace professional medical consultation but can be used to offer informative, general responses that can help guide users toward appropriate care.

## Intended Use
- **Primary Use Case**: Medical question-answering chatbot
- **Intended Users**: Developers or healthcare professionals seeking a chatbot interface for initial user engagement or educational purposes. 
- **Environment**: Can be deployed in environments where lightweight models are necessary (due to quantization and PEFT optimization).
- **Primary Applications**: Healthcare chatbots, medical assistants, online medical question-answering systems.

### Example Inputs
- **User**: "What are the common treatments for a sore throat?"
- **Model**: "For a sore throat, common treatments include rest, hydration, throat lozenges, warm saltwater gargles, and over-the-counter pain relief medications. If the sore throat persists, you should consult a doctor as it may indicate a bacterial infection."

- **User**: "Can you suggest a homeopathic remedy for hair loss?"
- **Model**: "Homeopathic remedies like Arnica, Phosphorus, and Silicea are often recommended for hair loss. However, it's important to consult a homeopathic practitioner for a tailored treatment."

## Training Dataset
- **Dataset**: [AI Medical Chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot)
  - This dataset contains a wide variety of medical queries and corresponding answers, covering symptoms, diagnoses, treatments, and remedies; a minimal loading snippet is shown below.
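
The dataset can be pulled directly from the Hub with the `datasets` library. The snippet below only loads it and prints one record; the column names depend on the dataset's current schema on the Hub.

```python
from datasets import load_dataset

# Medical Q&A pairs used for fine-tuning
dataset = load_dataset("ruslanmv/ai-medical-chatbot", split="train")

print(dataset)     # number of rows and column names
print(dataset[0])  # one patient question / doctor answer pair
```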

## Training Process
The model was trained using the following setup; a hedged configuration sketch follows the list:
- **Optimizer**: AdamW
- **Batch Size**: 2
- **Gradient Accumulation**: 4 steps
- **Learning Rate**: 2e-4
- **Max Steps**: 5000
- **Epochs**: 500 (with early stopping)
- **Quantization**: Applied for memory efficiency
- **LoRA**: Used for parameter-efficient fine-tuning
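
As a rough illustration, these settings map onto `transformers.TrainingArguments` as sketched below. The `output_dir`, mixed-precision flag, logging/saving cadence, and early-stopping criteria are not specified above and are therefore assumptions.

```python
from transformers import TrainingArguments

# Hyperparameters mirrored from the list above; values marked "assumption"
# are not documented in this card
training_args = TrainingArguments(
    output_dir="./llama2-medical-lora",  # assumption
    per_device_train_batch_size=2,       # Batch Size: 2
    gradient_accumulation_steps=4,       # Gradient Accumulation: 4 steps
    learning_rate=2e-4,                  # Learning Rate: 2e-4
    max_steps=5000,                      # Max Steps: 5000 (takes precedence over epochs)
    num_train_epochs=500,                # Epochs: 500 (early stopping ends training sooner)
    optim="adamw_torch",                 # AdamW
    fp16=True,                           # assumption: mixed-precision training
    logging_steps=50,                    # assumption
    save_steps=500,                      # assumption
)
```

These arguments would then be passed, together with the quantized LoRA-wrapped model from the earlier sketch and the tokenized dataset, to a `Trainer` (or TRL's `SFTTrainer`) to run the fine-tuning.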

## Limitations
- **Not a Substitute for Medical Advice**: This model is trained to assist with general medical questions but should **not** be used to make clinical decisions or substitute professional medical advice.
- **Biases**: The model's responses may reflect the biases inherent in the dataset it was trained on.
- **Data Limitation**: The model may not have been exposed to niche or highly specialized medical knowledge and could provide incomplete or incorrect information in such cases.

## Ethical Considerations
This model is designed to assist with medical-related queries and provide useful responses. However, users are strongly encouraged to consult licensed healthcare providers for serious medical conditions, diagnoses, or treatment plans. Misuse of the model for self-diagnosis or treatment is discouraged.

### Warning
The outputs of this model should not be relied upon for critical or life-threatening situations. It is essential to consult a healthcare professional before taking any medical action based on this model's suggestions.

## How to Use

You can load and use this model for medical chatbot applications with the Hugging Face Transformers and PEFT libraries:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

base_model_id = "NousResearch/Llama-2-7b-chat-hf"
adapter_id = "MassMin/llama2_ai_medical_chatbot"

# Load the base model and attach the fine-tuned LoRA adapter
model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(model, adapter_id)

# The tokenizer comes from the base model; reuse the EOS token for padding
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_length=256,
)

# Wrap the question in the Llama-2 chat instruction format
prompt = "What are the common treatments for a sore throat?"
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
```
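
The `[INST] ... [/INST]` wrapper above follows the Llama-2 chat convention. A system prompt can also be added using the standard Llama-2 `<<SYS>>` block; whether this adapter was trained with system prompts is not documented, so treat the example below as an assumption. It reuses the `pipe` object from the previous snippet.

```python
# Optional: prepend a Llama-2 style system prompt; the system text here is an
# illustrative assumption, not the one used during fine-tuning
system = "You are a helpful medical assistant. Always advise consulting a doctor."
question = "Can you suggest a homeopathic remedy for hair loss?"

prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{question} [/INST]"
result = pipe(prompt)
print(result[0]["generated_text"])
```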