Spaces: Running on Zero

Update app.py

app.py CHANGED
@@ -12,10 +12,11 @@ DEFAULT_MAX_NEW_TOKENS = 1024
 MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))
 
 DESCRIPTION = """\
-# 
+# BioMistral-7b Chat
 
-This Space demonstrates model [
+This Space demonstrates model [BioMistral-7b](https://huggingface.co/BioMistral/BioMistral-7B) by BioMistral, a finetuned Mistral model with 7B parameters for chat instructions.
 
+Advisory Notice! Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
 """
 
 
@@ -24,7 +25,7 @@ if not torch.cuda.is_available():
 
 
 if torch.cuda.is_available():
-    model_id = "
+    model_id = "BioMistral/BioMistral-7B"
     model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     tokenizer.use_default_system_prompt = False
@@ -118,10 +119,6 @@ chat_interface = gr.ChatInterface(
     stop_btn=None,
     examples=[
        ["Hello there! How are you doing?"],
-       ["Can you explain briefly to me what is the Python programming language?"],
-       ["Explain the plot of Cinderella in a sentence."],
-       ["How many hours does it take a man to eat a Helicopter?"],
-       ["Write a 100-word article on 'Benefits of Open-Source in AI research'"],
     ],
 )
 
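The `MAX_INPUT_TOKEN_LENGTH` environment variable read at the top of the diff caps how many tokens of chat history are sent to the model. A minimal sketch of how such a guard typically works, assuming a plain list of token ids in place of real tokenizer output (the `trim_input_ids` helper is illustrative, not code from this Space):

```python
import os

# Cap on prompt length, read the same way the Space's app.py reads it.
MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))

def trim_input_ids(input_ids: list[int], max_len: int) -> list[int]:
    """Keep only the most recent max_len tokens (illustrative helper)."""
    if len(input_ids) > max_len:
        # Drop the oldest tokens so the newest chat turns survive.
        return input_ids[-max_len:]
    return input_ids

history = list(range(5000))            # stand-in for tokenizer output
trimmed = trim_input_ids(history, MAX_INPUT_TOKEN_LENGTH)
assert len(trimmed) == min(len(history), MAX_INPUT_TOKEN_LENGTH)
assert trimmed[-1] == history[-1]      # newest token is preserved
```

Trimming from the front rather than the back is the usual choice in chat demos, since the most recent turns matter most for the next reply.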