# Model Card for Llama-2-7b-chat-finetune
This model is a fine-tuned version of Llama-2-7b for chat applications, trained on a medical-domain corpus to answer medical queries in detail.
## Model Details

### Model Description
This model is fine-tuned from Llama-2-7b on a large corpus of medical data to answer medical queries and related tasks. It generates conversational text responses from a given prompt.
- Developed by: SURESHBEEKHANI
- License: MIT
- Model type: Causal Language Model
- Language(s): English
- Finetuned from model: Llama-2-7b
### Model Sources
- Repository: SURESHBEEKHANI/Llama-2-7b-chat-finetune
- Code Notebook: Fine-tune Llama-2-7b
## Use Cases

### Direct Use
This model can be used directly for generating text responses to prompts related to medical topics. It is designed to assist in answering medical queries with detailed information.
### Out-of-Scope Use
This model is not suited to queries outside the medical domain, and it should not be used in contexts where its outputs could be sensitive, harmful, or biased.
## Bias, Risks, and Limitations
The model may inherit biases from its training data and may not always provide accurate medical information. Use it as a supplementary tool only, and consult medical professionals for critical decisions.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "SURESHBEEKHANI/Llama-2-7b-chat-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the question in the Llama-2 instruction template before generation.
prompt = "What is superficial vein thrombosis? Explain in detail."
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
```
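The `<s>[INST] … [/INST]` wrapper in the example above is the Llama-2 chat prompt format. For multi-turn conversations the same wrapper repeats once per turn, with completed assistant replies closed by `</s>`. Below is a minimal helper sketch of that convention; the function name and signature are illustrative and not part of this repository.

```python
def build_llama2_prompt(turns, system_prompt=None):
    """Build a Llama-2 chat prompt string.

    turns: list of (user, assistant) pairs; pass assistant=None for the
    final, unanswered user message. An optional system prompt is folded
    into the first user turn inside <<SYS>> tags, per the Llama-2 convention.
    """
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        text = user
        if i == 0 and system_prompt:
            # The system prompt lives inside the first [INST] block.
            text = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user}"
        prompt += f"<s>[INST] {text} [/INST]"
        if assistant is not None:
            # Completed assistant turns are appended and closed with </s>.
            prompt += f" {assistant} </s>"
    return prompt


# Single-turn prompt, matching the pipeline call above:
print(build_llama2_prompt([("What is superficial vein thrombosis?", None)]))
# <s>[INST] What is superficial vein thrombosis? [/INST]
```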
## Base Model
- NousResearch/Llama-2-7b-chat-hf