---
license: mit
datasets:
- keivalya/MedQuad-MedicalQnADataset
language:
- en
library_name: peft
metrics:
- accuracy
- bertscore
- bleu
pipeline_tag: text-generation
tags:
- medical
---

# K23 MiniMed Model Card

K23 MiniMed is a Mistral 7B-Beta medical fine-tune developed during the Krew x Huggingface 2023 hackathon under the mentorship of [Wonhyeong Seo](https://www.huggingface.co/wseo).

## Model Details

- **Developed by:** [Tonic](https://huggingface.co/Tonic)
- **Funded by:** [Tonic](https://huggingface.co/Tonic)
- **Shared by:** K23-Krew-Hackathon
- **Model type:** Mistral 7B-Beta Medical Fine Tune
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [Zephyr 7B-Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)

### Model Sources

- **Repository:** [GitHub](https://github.com/Josephrp/AI-challenge-hackathon/blob/master/mistral7b-beta_finetune.ipynb)
- **Demo:** [pseudolab/K23MiniMed](https://huggingface.co/spaces/pseudolab/K23MiniMed)

## Uses

This model is a conversational application for medical question answering, **for educational purposes only**.

### Direct Use

Build a Gradio chatbot app that takes medical questions and answers them conversationally; a minimal loading sketch follows.
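
The sketch below shows one way to load the adapter and ask a single question. It assumes the `pseudolab/K23_MiniMed` adapter on top of `HuggingFaceH4/zephyr-7b-beta` and the `[INST]` prompt format used in the full demo later in this card; it is a minimal illustration, not the official inference recipe.

```python
# Minimal sketch: load the base model plus the K23 MiniMed LoRA adapter and
# answer one medical question. The prompt format mirrors the demo code below.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
model = PeftModel.from_pretrained(base, "pseudolab/K23_MiniMed").to(device)

prompt = "<s>[INST]You are an expert medical analyst: What are the symptoms of influenza?[/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(device)
output = model.generate(**inputs, max_new_tokens=256, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```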

### Downstream Use

This model is **for educational use only**. Further fine-tunes and uses would include public health & sanitation, personal health & sanitation, and medical Q&A.

### Recommendations

Always evaluate and benchmark this model before use, and evaluate its bias before use. Do not use it as is; fine-tune it further.

## Training Details

The model's training loss by step:

| Step | Training Loss |
|------|---------------|
| 50 | 0.993800 |
| 100 | 0.620600 |
| 150 | 0.547100 |
| 200 | 0.524100 |
| 250 | 0.520500 |
| 300 | 0.559800 |
| 350 | 0.535500 |
| 400 | 0.505400 |

### Training Data

Trainable parameters: 21,260,288; total parameters: 3,773,331,456; trainable share: 0.5634%.

### Results

The training loss at global_step=400 was 0.6008514881134033.

## Environmental Impact

The model's environmental impact can be estimated with the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute); more details are needed to provide an estimate.

## Technical Specifications

### Model Architecture and Objective

The model is a PeftModelForCausalLM, i.e. a LoRA adapter over a causal language model (the full configuration appears in the English section below).

### Compute Infrastructure

#### Hardware

The model was trained on A100 hardware.

#### Software

The software used includes peft, torch, bitsandbytes, Python, and the Hugging Face libraries.

## Model Card Authors

[Tonic](https://huggingface.co/Tonic)

## Model Card Contact

[Tonic](https://huggingface.co/Tonic)

# Model Card for K23 MiniMed

This is a Mistral 7B-Beta medical fine-tune trained for a small number of steps, inspired by [Wonhyeong Seo](https://www.huggingface.co/wseo)'s great mentorship during the Krew x Huggingface 2023 hackathon.

## Model Details

### Model Description

- **Developed by:** [Tonic](https://huggingface.co/Tonic)
- **Funded by:** [Tonic](https://huggingface.co/Tonic)
- **Shared by:** K23-Krew-Hackathon
- **Model type:** Mistral 7B-Beta Medical Fine Tune
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [Zephyr 7B-Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)

### Model Sources

- **Repository:** [GitHub](https://github.com/Josephrp/AI-challenge-hackathon/blob/master/mistral7b-beta_finetune.ipynb)
- **Demo:** [pseudolab/K23MiniMed](https://huggingface.co/spaces/pseudolab/K23MiniMed)

## Uses

Use this model for conversational medical question answering, **for educational purposes only**!

### Direct Use

Make a Gradio chatbot app to ask medical questions and get answers conversationally (the full app code is under "How to Get Started with the Model" below).

### Downstream Use

This model is **for educational use only**.

Further fine-tunes and uses would include:

- public health & sanitation
- personal health & sanitation
- medical Q & A

### Recommendations

- always evaluate this model before use
- always benchmark this model before use
- always evaluate bias before use
- do not use as is; fine-tune further (see the sketch after this list)
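
As a rough sketch of what "fine-tune further" can look like, the following continues training the adapter on the MedQuad dataset from this card's metadata. The column names, prompt template, and hyperparameters are illustrative assumptions, not the original hackathon recipe:

```python
# Illustrative continued fine-tuning of the K23 MiniMed adapter on MedQuad.
# Column names ("Question"/"Answer") and all hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
tokenizer.pad_token = tokenizer.eos_token

base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
# is_trainable=True unfreezes the LoRA weights for further training
model = PeftModel.from_pretrained(base, "pseudolab/K23_MiniMed", is_trainable=True)

data = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")

def tokenize(example):
    # Follow the [INST] prompt format used elsewhere in this card
    text = f"<s>[INST]{example['Question']}[/INST] {example['Answer']}</s>"
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="k23-minimed-continued",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```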

## How to Get Started with the Model

Use the code below to get started with the model.
```python
import torch
import textwrap
import gradio as gr
from transformers import AutoTokenizer, MistralForCausalLM
from peft import PeftModel

# Function to wrap the model's output for display
def wrap_text(text, width=90):
    lines = text.split('\n')
    wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
    wrapped_text = '\n'.join(wrapped_lines)
    return wrapped_text

def multimodal_prompt(user_input, system_prompt="You are an expert medical analyst:", max_length=512):
    # Combine user input and system prompt
    formatted_input = f"<s>[INST]{system_prompt} {user_input}[/INST]"

    # Encode the input text
    encodeds = tokenizer(formatted_input, return_tensors="pt", add_special_tokens=False)
    model_inputs = encodeds.to(device)

    # Generate a response using the model
    output = peft_model.generate(
        **model_inputs,
        max_length=max_length,
        use_cache=True,
        bos_token_id=peft_model.config.bos_token_id,
        eos_token_id=peft_model.config.eos_token_id,
        pad_token_id=peft_model.config.eos_token_id,
        temperature=0.1,
        do_sample=True
    )

    # Decode the response
    response_text = tokenizer.decode(output[0], skip_special_tokens=True)

    return response_text

# Define the device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Base model and adapter IDs
base_model_id = "HuggingFaceH4/zephyr-7b-beta"
model_directory = "pseudolab/K23_MiniMed"

# Instantiate the tokenizer
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", trust_remote_code=True, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token

# Load the base model, then attach the PEFT adapter on top of it
peft_model = MistralForCausalLM.from_pretrained(base_model_id, trust_remote_code=True)
peft_model = PeftModel.from_pretrained(peft_model, model_directory).to(device)

class ChatBot:
    def __init__(self):
        # Initialize the ChatBot class with an empty history
        self.history = []

    def predict(self, user_input, system_prompt="You are an expert medical analyst:"):
        # Combine the user's input with the system prompt
        formatted_input = f"<s>[INST]{system_prompt} {user_input}[/INST]"

        # Encode the formatted input using the tokenizer
        user_input_ids = tokenizer.encode(formatted_input, return_tensors="pt").to(device)

        # Generate a response using the PEFT model
        response = peft_model.generate(input_ids=user_input_ids, max_length=512, pad_token_id=tokenizer.eos_token_id)

        # Decode the generated response to text
        response_text = tokenizer.decode(response[0], skip_special_tokens=True)

        return response_text  # Return the generated response

bot = ChatBot()

title = "Welcome to Tonic's MistralMed Chat"
description = "You can use this Space to test out the current model [(Tonic/MistralMed)](https://huggingface.co/Tonic/MistralMed) or duplicate this Space and use it locally or on 🤗HuggingFace. [Join me on Discord to build together](https://discord.gg/VqTxc76K3u)."
examples = [["[Question:] What is the proper treatment for buccal herpes?", "You are a medicine and public health expert, you will receive a question, answer the question, and provide a complete answer"]]

iface = gr.Interface(
    fn=bot.predict,
    title=title,
    description=description,
    examples=examples,
    inputs=["text", "text"],  # Take user input and system prompt separately
    outputs="text",
    theme="ParityError/Anime"
)

iface.launch()
```
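
Once launched, the Gradio interface takes two text inputs: the first is the medical question, and the second replaces the default system prompt passed to `ChatBot.predict`.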

## Training Details

| Step | Training Loss |
|------|---------------|
| 50 | 0.993800 |
| 100 | 0.620600 |
| 150 | 0.547100 |
| 200 | 0.524100 |
| 250 | 0.520500 |
| 300 | 0.559800 |
| 350 | 0.535500 |
| 400 | 0.505400 |

### Training Data

The model was fine-tuned on [keivalya/MedQuad-MedicalQnADataset](https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset). With LoRA, only a small fraction of the weights is trained:

```text
trainable params: 21260288 || all params: 3773331456 || trainable%: 0.5634354746703705
```
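
This summary line is the standard output of PEFT's `print_trainable_parameters()`; a minimal sketch of reproducing it (loading with `is_trainable=True` so the adapter weights count as trainable):

```python
# Sketch: the summary above is what PEFT's print_trainable_parameters() emits.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
# is_trainable=True unfreezes the adapter so it is counted as trainable
model = PeftModel.from_pretrained(base, "pseudolab/K23_MiniMed", is_trainable=True)
model.print_trainable_parameters()
# Prints: trainable params: ... || all params: ... || trainable%: ...
# (the "all params" count in the card reflects a 4-bit quantized base)
```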

### Training Procedure

#### Preprocessing

Lora32bits (LoRA adapters over a quantized base; the architecture dump below shows the 4-bit `Linear4bit` base layers).
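
A minimal sketch of such a setup, assuming QLoRA-style 4-bit loading; the rank (8) and dropout (0.05) come from the architecture dump below, while the remaining values are assumptions:

```python
# Sketch of a LoRA-on-quantized-base setup consistent with this card.
# r=8 and lora_dropout=0.05 match the architecture dump; other values are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta",
                                            quantization_config=bnb_config)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=8,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    # Target modules mirror the LoRA-wrapped layers in the dump below
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj", "lm_head"],
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```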

#### Speeds, Sizes, Times

```text
metrics={'train_runtime': 1700.1608, 'train_samples_per_second': 1.882, 'train_steps_per_second': 0.235, 'total_flos': 9.585300996096e+16, 'train_loss': 0.6008514881134033, 'epoch': 0.2}
```
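
As a sanity check on these numbers: 400 steps at 0.235 steps/s is about 1702 s, matching the reported train_runtime of roughly 28 minutes, and 1.882 samples/s over that runtime is about 3,200 examples, which at 0.2 epochs implies a dataset of roughly 16,000 examples.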

### Results

```text
TrainOutput(global_step=400, training_loss=0.6008514881134033)
```

`TrainOutput` is the object returned by `Trainer.train()`; `training_loss` here is the average loss over all 400 steps.
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** A100
- **Hours used:** ~0.5 (train_runtime of about 1700 s)
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications

### Model Architecture and Objective

The model is a `PeftModelForCausalLM`: rank-8 LoRA adapters (dropout 0.05) applied to the attention and MLP projections and the LM head of a 4-bit quantized Mistral-architecture base, trained with a causal language modeling objective:

```python
PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralForCausalLM(
      (model): MistralModel(
        (embed_tokens): Embedding(32000, 4096)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=4096, bias=False)
              )
              (k_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
              )
              (v_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
              )
              (o_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=4096, bias=False)
              )
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=14336, bias=False)
              )
              (up_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=4096, out_features=14336, bias=False)
              )
              (down_proj): Linear4bit(
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=14336, out_features=8, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=8, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
                (base_layer): Linear4bit(in_features=14336, out_features=4096, bias=False)
              )
              (act_fn): SiLUActivation()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
      )
      (lm_head): Linear(
        in_features=4096, out_features=32000, bias=False
        (lora_dropout): ModuleDict(
          (default): Dropout(p=0.05, inplace=False)
        )
        (lora_A): ModuleDict(
          (default): Linear(in_features=4096, out_features=8, bias=False)
        )
        (lora_B): ModuleDict(
          (default): Linear(in_features=8, out_features=32000, bias=False)
        )
        (lora_embedding_A): ParameterDict()
        (lora_embedding_B): ParameterDict()
      )
    )
  )
)
```

### Compute Infrastructure

#### Hardware

A100

#### Software

peft, torch, bitsandbytes, python, huggingface

## Model Card Authors

[Tonic](https://huggingface.co/Tonic)

## Model Card Contact

[Tonic](https://huggingface.co/Tonic)