---
license: llama3.2
tags:
  - unsloth
  - text-generation
datasets:
  - marmikpandya/mental-health
  - Amod/mental_health_counseling_conversations
  - AdithyaSK/CompanionLLama_instruction_30k
base_model:
  - unsloth/Llama-3.2-3B-Instruct
library_name: transformers
---

Model Card for ayeshaNoor1/Llama_finetunedModel

A Llama-3.2-3B-Instruct model fine-tuned for empathetic mental well-being support conversations.

Model Details

Model Description

This model is a fine-tune of unsloth/Llama-3.2-3B-Instruct for use in a chatbot aimed at mental well-being support. It is designed to offer empathetic, supportive responses to users' mental health inquiries. Three relevant datasets were merged into a single training set to enhance the model's ability to understand and respond appropriately in counseling scenarios.

Uses

Direct Use

Intended for mental health chatbot applications, particularly for providing initial support, resources, and empathetic responses in mental well-being conversations.

Downstream Use

May be used as part of broader mental health support applications, integrated into platforms aimed at user well-being.

Out-of-Scope Use

Not recommended for critical mental health assessments, as it is not a replacement for professional help. Avoid using it for high-stakes decision-making without appropriate oversight.

Recommendations

Users should be aware of the model's limitations in handling diverse mental health needs and sensitive conversations. Professional oversight is advised when using the model in serious or emergency mental health contexts.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ayeshaNoor1/Llama_finetunedModel")
model = AutoModelForCausalLM.from_pretrained(
    "ayeshaNoor1/Llama_finetunedModel", torch_dtype="auto"
)

# Sample input, wrapped in the model's chat template
messages = [
    {"role": "user", "content": "I'm feeling really down lately. Can you help me?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a response; max_new_tokens bounds the length of the reply
outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens and print the response
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)

Training Details

Training Data

A single training dataset was created by merging the following datasets (one possible way to do the merge is sketched below):

  • marmikpandya/mental-health
  • Amod/mental_health_counseling_conversations
  • AdithyaSK/CompanionLLama_instruction_30k
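The snippet below is a minimal sketch of such a merge using the datasets library; the column names used to map each source onto a shared schema are assumptions for illustration, not the actual preprocessing code used for this model.

from datasets import load_dataset, concatenate_datasets

# Load the three source datasets from the Hugging Face Hub
sources = [
    "marmikpandya/mental-health",
    "Amod/mental_health_counseling_conversations",
    "AdithyaSK/CompanionLLama_instruction_30k",
]
raw_sets = [load_dataset(name, split="train") for name in sources]

# Map every source onto a shared {"prompt", "response"} schema before merging.
# NOTE: the column names checked here are assumptions for illustration;
# inspect each dataset's actual schema before running this.
def to_common_schema(example):
    return {
        "prompt": example.get("Context") or example.get("input") or "",
        "response": example.get("Response") or example.get("output") or "",
    }

normalized = [
    ds.map(to_common_schema, remove_columns=ds.column_names) for ds in raw_sets
]

# Concatenate into a single combined training dataset
combined = concatenate_datasets(normalized)
print(combined)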

Training Procedure

Preprocessing

The merged data was preprocessed into a consistent format, filtered for relevance to mental health support, and scrubbed of any sensitive or personal identifiers.
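As an illustration of the identifier-scrubbing step, the sketch below masks e-mail addresses and phone numbers with simple regular expressions. The actual cleaning rules used for this model are not published, so treat this only as an example of the kind of preprocessing described.

import re

# Hypothetical cleaning step: mask simple personal identifiers and normalize whitespace.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def clean_text(text: str) -> str:
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return " ".join(text.split())

print(clean_text("Email me at jane.doe@example.com or call 555 123 4567."))
# -> Email me at [EMAIL] or call [PHONE].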

Summary

The model demonstrated proficiency in providing supportive responses in well-being conversations.

Technical Specifications

Compute Infrastructure

Software

  • Libraries: transformers, datasets, torch, pandas, trl, unsloth (see the sketch after this list for how they might fit together)
  • Framework: PyTorch
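Below is a minimal sketch of a fine-tuning setup with these libraries, assuming the merged data has been rendered into a single "text" field. The hyperparameters and placeholder data are illustrative, not the values actually used, and the exact SFTTrainer arguments vary between trl versions.

from datasets import Dataset
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Placeholder data: in practice this would be the merged, cleaned dataset
# described under Training Data, rendered into a single "text" field.
train_dataset = Dataset.from_dict(
    {"text": ["User: I feel anxious lately.\nAssistant: I'm sorry you're going through that..."]}
)

# Load the base model with unsloth and attach LoRA adapters
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Supervised fine-tuning with trl's SFTTrainer (illustrative hyperparameters)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()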