
PsyLlama: A Conversational AI for Mental Health Assessment

Model Name: PsyLlama
Model Architecture: LLaMA-based model (fine-tuned)
Model Type: Instruct-tuned, conversational AI model
Primary Use: Mental health assessment through psychometric analysis


Model Description

PsyLlama is a conversational AI model based on the LLaMA architecture and fine-tuned for mental health assessment. It is designed to assist healthcare professionals in conducting initial psychometric evaluations and mental health assessments by generating context-aware conversational responses. The model works through structured questions and answers to assess a patient's mental state and to support clinical decision-making in telemedicine settings.
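
This card does not document an exact prompt template, so the snippet below is only a sketch of how a structured clinician/patient exchange might be flattened into a single prompt string; the role labels and the build_prompt helper are hypothetical illustrations, not part of the released model.

# Hypothetical sketch: flatten a structured question-and-answer history into
# one prompt string. The turn format is an assumption, not a documented template.
def build_prompt(turns):
    # turns: list of (role, text) pairs, e.g. ("Clinician", "How did you sleep?")
    lines = [f"{role}: {text}" for role, text in turns]
    lines.append("Clinician:")  # ask the model to continue the assessment
    return "\n".join(lines)

example_turns = [
    ("Clinician", "Over the last two weeks, how often have you felt down or hopeless?"),
    ("Patient", "Most days, especially in the evenings."),
]
prompt = build_prompt(example_turns)

In a real deployment, the prompt should follow whatever format the model was actually fine-tuned with.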

Applications:

  • Psychometric evaluation
  • Mental health chatbot
  • Symptom analysis for mental health assessment

Model Usage

To use PsyLlama, load it from the Hugging Face Hub with the transformers library. The snippet below shows how to initialize the model and generate a response:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from Hugging Face
model_name = "Nevil9/PsyLlama"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example input
input_text = "How are you feeling today? Have you been experiencing any anxiety or stress?"

# Tokenize the input and generate a response
# (max_new_tokens caps the generated continuation, independent of prompt length)
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)

# Decode and print the response
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
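
For quick experiments, the same checkpoint can also be run through the transformers text-generation pipeline. The sampling settings below (do_sample, temperature, max_new_tokens) are illustrative values chosen for this sketch, not recommendations from the model authors.

from transformers import pipeline

# Build a text-generation pipeline around the same checkpoint
generator = pipeline("text-generation", model="Nevil9/PsyLlama")

# Illustrative sampling settings; tune them for your own use case
result = generator(
    "How are you feeling today? Have you been experiencing any anxiety or stress?",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])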