Update README.md
README.md
CHANGED
@@ -33,7 +33,7 @@ This repository provides a fine-tuned version of the powerful Llama3 8B model, s

This model is accessible through the Hugging Face Transformers library. Install it using pip:

```bash
pip install transformers
```

**Usage Example**
@@ -41,45 +41,51 @@ pip install transformers

Here's a Python code snippet demonstrating how to interact with the `Medical-Llama3-8B-16bit` model and generate answers to your medical questions:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("ruslanmv/Medical-Llama3-8B")
model = AutoModelForCausalLM.from_pretrained("ruslanmv/Medical-Llama3-8B").to("cuda")  # If using GPU

# Function to format and generate response with prompt engineering using a chat template
def askme(question):
    sys_message = '''
    You are an AI Medical Assistant trained on a vast dataset of health information. Please be thorough and
    provide an informative answer. If you don't know the answer to a specific medical inquiry, advise seeking professional help.
    '''

    # Create messages structured for the chat template
    messages = [{"role": "system", "content": sys_message}, {"role": "user", "content": question}]

    # Applying chat template
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=100, use_cache=True)

    # Extract and return the generated text
    answer = tokenizer.batch_decode(outputs)[0]
    return answer

# Example usage
# - Context: First describe your problem.
# - Question: Then make the question.
question = '''
I'm a 35-year-old male and for the past few months, I've been experiencing fatigue, increased sensitivity to cold, and dry, itchy skin.
Could these symptoms be related to hypothyroidism?
If so, what steps should I take to get a proper diagnosis and discuss treatment options?
'''
print(askme(question))
```

The generated answer looks like this:
```
```

**Important Note**

This model is accessible through the Hugging Face Transformers library. Install it using pip:

```bash
pip install transformers bitsandbytes accelerate
```
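The extra `bitsandbytes` and `accelerate` packages support the 4-bit quantized loading used in the usage example. As a rough sketch of why 4-bit (NF4) weights matter for an 8B-parameter model — the parameter count and the no-overhead assumption here are illustrative, not measured figures for this model:

```python
# Back-of-the-envelope weight-memory estimate for an ~8B-parameter model.
# NOTE: 8_000_000_000 is an assumed round number, and runtime overhead
# (activations, KV cache, CUDA context) is deliberately ignored.
params = 8_000_000_000

fp16_gb = params * 2 / 1e9   # 16-bit floats: 2 bytes per parameter
nf4_gb = params * 0.5 / 1e9  # 4-bit NF4: 0.5 bytes per parameter

print(f"fp16 weights: ~{fp16_gb:.0f} GB, 4-bit weights: ~{nf4_gb:.0f} GB")
```

On this estimate the quantized weights fit on a single consumer GPU, which is the point of the `BitsAndBytesConfig` below.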

**Usage Example**

Here's a Python code snippet demonstrating how to interact with the `Medical-Llama3-8B-16bit` model and generate answers to your medical questions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_name = "ruslanmv/Medical-Llama3-8B"
device_map = 'auto'

# Load the model with 4-bit NF4 quantization to reduce memory usage
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    trust_remote_code=True,
    use_cache=False,
    device_map=device_map,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

def askme(question):
    sys_message = '''
    You are an AI Medical Assistant trained on a vast dataset of health information. Please be thorough and
    provide an informative answer. If you don't know the answer to a specific medical inquiry, advise seeking professional help.
    '''

    # Create messages structured for the chat template
    messages = [{"role": "system", "content": sys_message}, {"role": "user", "content": question}]

    # Applying chat template
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=100, use_cache=True)

    # Extract and return the generated text, removing the prompt
    response_text = tokenizer.batch_decode(outputs)[0].strip()
    answer = response_text.split('<|im_start|>assistant')[-1].strip()
    return answer

# Example usage
# - Context: First describe your problem.
# - Question: Then make the question.
question = '''I'm a 35-year-old male and for the past few months, I've been experiencing fatigue,
increased sensitivity to cold, and dry, itchy skin.
Could these symptoms be related to hypothyroidism?
If so, what steps should I take to get a proper diagnosis and discuss treatment options?'''

print(askme(question))
```
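The `apply_chat_template` call renders the messages into the model's chat format before tokenization. As a minimal, illustrative sketch of what such a rendering looks like — the authoritative template lives in the tokenizer's config, and this ChatML-style form is only assumed here from the `<|im_start|>assistant` marker used when the answer is extracted:

```python
# Hand-rolled ChatML-style rendering, for illustration only.
# In real use, always call tokenizer.apply_chat_template instead.
def render_chatml(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant")  # generation prompt: the model continues from here
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are an AI Medical Assistant."},
    {"role": "user", "content": "What causes fatigue?"},
]
prompt = render_chatml(messages)
print(prompt)
```

The trailing `<|im_start|>assistant` line is what `add_generation_prompt=True` produces: it cues the model to generate the assistant turn.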

The generated answer looks like this:
```
Based on your description, it sounds like you may be experiencing symptoms of hypothyroidism.
Hypothyroidism is a condition where the thyroid gland doesn't produce enough hormones, leading to a variety of symptoms.
Some common symptoms include fatigue, weight gain, constipation, and dry skin.
If you're experiencing any of these symptoms, it's important to see a doctor for a proper diagnosis and treatment plan.
Your doctor may order blood tests to check your thyroid hormone levels
```
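The `split('<|im_start|>assistant')` step in `askme` works because `batch_decode` returns the full prompt together with the completion, so everything up to the last assistant marker is discarded. A small sketch on a mock decoded string (the text is invented for illustration; note that trailing special tokens such as `<|im_end|>` survive the split):

```python
# Mock of what tokenizer.batch_decode(outputs)[0] might return:
# the rendered prompt followed by the model's completion.
decoded = (
    "<|im_start|>system\nYou are an AI Medical Assistant.<|im_end|>\n"
    "<|im_start|>user\nCould these symptoms be related to hypothyroidism?<|im_end|>\n"
    "<|im_start|>assistant\nBased on your description, it sounds like hypothyroidism.<|im_end|>"
)

# Keep only the text after the last assistant marker, as askme does.
answer = decoded.split('<|im_start|>assistant')[-1].strip()
print(answer)
```

Passing `skip_special_tokens=True` to `batch_decode` would additionally remove the leftover `<|im_end|>`-style markers from the returned answer.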

**Important Note**