---
library_name: peft
tags:
- llm
- llama2
- medical
datasets:
- BI55/MedText
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-chat-hf
---

# Llama2 🦙 finetuned on medical diagnosis
MedText dataset: https://huggingface.co/datasets/BI55/MedText

1,412 pairs of diagnosis cases and answers


# About
The primary objective of this fine-tuning is to equip Llama2 to assist in diagnosing various medical cases and diseases.
It is not designed to replace real medical professionals. Its purpose is to provide helpful information to users,
suggesting potential next steps based on the input and the patterns learned from the MedText dataset.

Fine-tuned on Guanaco-style instructions:

```
###Human
###Assistant
```
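A prompt in this format can be assembled with a small helper like the sketch below (the function name and the example question are illustrative, not part of any released code):

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in the Guanaco-style template used for fine-tuning."""
    return f"###Human\n{question}\n###Assistant\n"

# Example: the model is expected to continue after the ###Assistant marker
prompt = build_prompt("A patient presents with chest pain radiating to the left arm. What should be considered?")
print(prompt)
```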
## Training procedure


The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
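The values above correspond to a `transformers` `BitsAndBytesConfig` roughly like this sketch (assuming a recent `transformers` with `bitsandbytes` installed; the actual training script is not included here):

```python
import torch
from transformers import BitsAndBytesConfig

# QLoRA-style 4-bit NF4 quantization matching the settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```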
### Framework versions


- PEFT 0.5.0.dev0