# PsyLlama: A Conversational AI for Mental Health Assessment
**Model Name**: `PsyLlama`
**Model Architecture**: LLaMA-based model (fine-tuned)
**Model Type**: Instruct-tuned, conversational AI model
**Primary Use**: Mental health assessment through psychometric analysis
---
### Model Description
**PsyLlama** is a conversational AI model based on the LLaMA architecture and fine-tuned for mental health assessment. It is designed to assist healthcare professionals with initial psychometric evaluations by generating context-aware conversational responses. The model uses structured questions and answers to assess a patient's mental state and supports clinical decision-making in telemedicine settings; an illustrative prompt sketch follows the applications list below.
**Applications**:
- Psychometric evaluation
- Mental health chatbot
- Symptom analysis for mental health assessment
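The exact prompt format used during fine-tuning is not documented in this card. As an illustration only, a structured assessment exchange could be framed as a single instruction-style prompt; the question wording, labels, and helper function below are hypothetical:

```python
# Illustrative only: PsyLlama's fine-tuning prompt format is not documented here.
# This sketch shows one plausible way to interleave structured screening questions
# and patient answers into a single prompt for the model.
screening_questions = [
    "Over the last two weeks, how often have you felt down, depressed, or hopeless?",
    "How would you rate your current stress level on a scale of 1 to 10?",
]

def build_assessment_prompt(questions, patient_answers):
    """Combine structured questions and patient answers into one prompt string."""
    lines = ["You are assisting a clinician with an initial mental health assessment."]
    for question, answer in zip(questions, patient_answers):
        lines.append(f"Question: {question}")
        lines.append(f"Patient: {answer}")
    lines.append("Summarize the patient's reported state and suggest follow-up questions.")
    return "\n".join(lines)

prompt = build_assessment_prompt(
    screening_questions,
    ["Several days a week.", "About a 7 right now."],
)
print(prompt)
```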
---
### Model Usage
To use **PsyLlama**, you can load it from Hugging Face using the `transformers` library. Below is a code snippet showing how to initialize and use the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer from Hugging Face
model_name = "Nevil9/PsyLlama"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example input
input_text = "How are you feeling today? Have you been experiencing any anxiety or stress?"
# Tokenize input and generate response
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
# Decode and print the response
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
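Because **PsyLlama** is instruct-tuned, its tokenizer may define a chat template; if so, prompts can be wrapped as chat messages rather than raw text. The following is a minimal sketch, assuming the repository ships a chat template (the message roles follow the standard `transformers` convention):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Nevil9/PsyLlama"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the prompt as a chat message. apply_chat_template only works if the
# repository defines a chat template, which is an assumption here.
messages = [
    {"role": "user", "content": "How are you feeling today? Have you been experiencing any anxiety or stress?"}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Generate up to 100 new tokens and decode only the tokens beyond the prompt.
output = model.generate(inputs, max_new_tokens=100)
response = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```

Using `max_new_tokens` rather than `max_length` keeps the generation budget independent of the prompt length, which is generally the safer choice for conversational prompts of varying size.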