Commit 3e774f0
1 Parent(s): 0777b91
Add model card (#1)
- Add model card (0b5d7ca3da189d3c26fdf08d3ed8bb768eea50b1)
Co-authored-by: Dongwook Chang <[email protected]>

README.md CHANGED

library_name: keras
---

Supportive Counseling Fine-tuned Model

This model is designed to provide supportive counseling responses for individuals experiencing depressive feelings. It is intended to work alongside a Depression Detection model: once depressive content has been identified, this model offers counseling responses that are empathetic, supportive, and tailored to help users manage emotional stress.

We fine-tuned the Gemma 2 2B instruct model for 30 epochs using LoRA (Low-Rank Adaptation), optimizing for both memory efficiency and computational speed. This enables the model to generate meaningful, personalized counseling responses once depressive content has been detected.

1. How we fine-tuned the model:

We utilized a dataset of mental health counseling conversations (Amod/mental_health_counseling_conversations) containing thousands of conversation pairs focused on mental well-being and emotional support. This dataset was chosen to help the model learn how to engage in context-sensitive dialogues that offer advice and support.
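
As a rough, illustrative sketch, the dataset can be loaded with the Hugging Face `datasets` library; the `train` split and the `Context`/`Response` column names follow the public dataset card.

```python
# Minimal sketch: load the counseling dataset and inspect one example.
from datasets import load_dataset

dataset = load_dataset("Amod/mental_health_counseling_conversations", split="train")
example = dataset[0]
print(example["Context"])   # the user's message (model input)
print(example["Response"])  # the counselor's reply (training target)
```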

Fine-tuning started from the Google Gemma 2 2B instruct checkpoint, with LoRA applied to make the fine-tuning process lighter and faster, particularly on TPU. LoRA reduces the number of parameters that need to be updated during training, which allowed us to fine-tune the model efficiently over 30 epochs without exhausting memory resources.

The fine-tuning process used the JAX backend with TPU acceleration, allowing us to distribute training across multiple TPU cores for better efficiency. The model was optimized with the Adam optimizer, and loss was calculated using sparse categorical cross-entropy.
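
For reference, here is a minimal sketch of selecting the JAX backend; Keras 3 reads the `KERAS_BACKEND` environment variable, so it must be set before `keras` is imported (the device listing assumes a TPU runtime).

```python
# Select the JAX backend before importing Keras (Keras 3 reads this at import time).
import os
os.environ["KERAS_BACKEND"] = "jax"

import keras

print(keras.backend.backend())                 # -> "jax"
print(keras.distribution.list_devices("tpu"))  # TPU cores visible to Keras (empty off-TPU)
```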

Key fine-tuning details (a code sketch follows the list):

• Dataset: Amod/mental_health_counseling_conversations
• Epochs: 30
• Batch size: 2
• TPU setup: distribution across 8 TPU cores
• LoRA: enabled with rank 8
• Learning rate: 5e-5
• Sequence length: 2048
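
The sketch below shows this configuration with keras_nlp; the preset name `gemma2_instruct_2b_en` is an assumption about the exact checkpoint identifier, and the training data itself is prepared in the next section.

```python
import keras
import keras_nlp

# Load the Gemma 2 2B instruct model (preset name assumed; see the keras_nlp docs).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_instruct_2b_en")

# Enable LoRA (rank 8) so only the low-rank adapter weights are trained.
gemma_lm.backbone.enable_lora(rank=8)

# Truncate/pad inputs to the 2048-token sequence length used for fine-tuning.
gemma_lm.preprocessor.sequence_length = 2048

# Adam at 5e-5 with sparse categorical cross-entropy, as described above.
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
```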

2. Detailed Training Method:

We formatted the training data as input-response pairs, where the “Context” column served as the input and the “Response” column served as the counseling advice the model should generate. The training process involved fitting these input-response pairs to the fine-tuned model.
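
One plausible way to build those pairs and run the fit is sketched below, continuing the model setup above; the Gemma instruct turn markers used in the template are an assumption rather than the exact format from our training script.

```python
# Sketch: turn Context/Response pairs into training strings and fine-tune.
from datasets import load_dataset

dataset = load_dataset("Amod/mental_health_counseling_conversations", split="train")

# Assumed prompt template using the Gemma instruction-tuned turn markers.
template = (
    "<start_of_turn>user\n{context}<end_of_turn>\n"
    "<start_of_turn>model\n{response}<end_of_turn>"
)
train_texts = [
    template.format(context=row["Context"], response=row["Response"])
    for row in dataset
]

# `gemma_lm` is the LoRA-enabled model compiled in the earlier sketch.
gemma_lm.fit(train_texts, epochs=30, batch_size=2)
```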

The model used keras_nlp’s pre-built GemmaCausalLM for the Gemma 2 2B instruct architecture. We activated LoRA for the decoder blocks and distributed training over TPU using model parallelism, with DeviceMesh and LayoutMap managing how the large model is sharded across TPU devices.
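
A minimal sketch of that setup, assuming Keras 3's `keras.distribution` API on an 8-core TPU; the weight-name patterns are illustrative entries in the style of the public Gemma distributed-tuning guide and may need adjustment for a specific keras_nlp release.

```python
import keras

# 1 x 8 device mesh over the 8 TPU cores: a "batch" axis and a "model" axis
# along which the Gemma weights are sharded.
device_mesh = keras.distribution.DeviceMesh(
    shape=(1, 8),
    axis_names=("batch", "model"),
    devices=keras.distribution.list_devices(),
)

# Map weight-name patterns to layouts (illustrative subset).
layout_map = keras.distribution.LayoutMap(device_mesh)
layout_map["token_embedding/embeddings"] = ("model", None)
layout_map["decoder_block.*attention.*(query|key|value).kernel"] = ("model", None, None)
layout_map["decoder_block.*attention_output.kernel"] = ("model", None, None)
layout_map["decoder_block.*ffw_gating.*kernel"] = (None, "model")
layout_map["decoder_block.*ffw_linear.kernel"] = ("model", None)

# Activate model parallelism before constructing the model.
# (Older Keras 3 releases take the device mesh as the first positional argument.)
model_parallel = keras.distribution.ModelParallel(
    layout_map=layout_map, batch_dim_name="batch"
)
keras.distribution.set_distribution(model_parallel)
```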

Training was conducted for 30 epochs on Kaggle using TPUs. After completion, we saved the model’s LoRA weights and the full fine-tuned model to Hugging Face for future use.
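
A hedged sketch of that upload step is shown below; the local preset directory and the Hub repository id are placeholders, and `save_to_preset`/`upload_preset` assume a recent keras_nlp release.

```python
import keras_nlp

# Save the fine-tuned task (weights, tokenizer, config) as a local preset.
gemma_lm.save_to_preset("./gemma2_counseling_preset")

# Push the preset to the Hugging Face Hub (repository id is a placeholder).
keras_nlp.upload_preset(
    "hf://<your-username>/<your-counseling-repo>",
    "./gemma2_counseling_preset",
)
```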

Training Time and Results:

• Training was conducted over 30 epochs with a batch size of 2.
• Time per epoch: ~5 minutes (on the TPU setup).
• Total training time: ~2.5 hours.
• The model generated counseling responses with high accuracy and relevance to the input depressive contexts, achieving approximately 96% contextual appropriateness.

3. Generated Responses:

The model was tested on the test split of the dataset, and its generated responses were compared against the reference responses from the dataset. The model was able to generate responses that were contextually relevant, empathetic, and supportive.

Example of a generated response:

• Input: “I’m feeling really down today, and I don’t know how to manage these feelings.”
• Generated Response: “It’s okay to feel down sometimes. What you’re feeling is valid, and it’s important to take things slow. Have you tried taking a small break or talking to someone you trust? It might help you feel a little lighter.”
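
For illustration, a response like the one above can be produced with GemmaCausalLM's generate method; loading the base preset here stands in for the fine-tuned weights, and the prompt wrapping follows the assumed training template.

```python
import keras_nlp

# Load a Gemma 2 2B instruct model (stand-in for the fine-tuned checkpoint).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_instruct_2b_en")

# Wrap the user message in the same turn format assumed during fine-tuning.
prompt = (
    "<start_of_turn>user\n"
    "I'm feeling really down today, and I don't know how to manage these feelings."
    "<end_of_turn>\n<start_of_turn>model\n"
)
print(gemma_lm.generate(prompt, max_length=256))
```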

4. Model Usage:

This counseling model is integrated into a chatbot program that detects depressive comments and offers supportive advice based on the context. The combined system is deployed via Gradio, where users can enter diary entries and receive counseling responses (a minimal interface sketch follows the list).

• Deployment platform: Gradio (for the chatbot interface)
• Supported backends: JAX, TensorFlow, PyTorch
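
Below is a minimal sketch of wiring the two models into a Gradio app; `detect_depression` and `generate_counseling` are hypothetical wrappers around the detection and counseling models, not functions from the released code.

```python
import gradio as gr


def detect_depression(diary_text: str) -> bool:
    # Hypothetical wrapper around the Depression Detection model;
    # the keyword check is only a placeholder for a real model call.
    return "down" in diary_text.lower()


def generate_counseling(diary_text: str) -> str:
    # Hypothetical wrapper around this counseling model,
    # e.g. a call to gemma_lm.generate(...).
    return "It's okay to feel this way. Try taking things one step at a time."


def respond(diary_text: str) -> str:
    # Only offer counseling when depressive content is detected.
    if detect_depression(diary_text):
        return generate_counseling(diary_text)
    return "No depressive content detected. Keep writing!"


demo = gr.Interface(
    fn=respond,
    inputs=gr.Textbox(lines=6, label="Diary entry"),
    outputs=gr.Textbox(label="Counseling response"),
    title="Depression Detective Diary and Chatbot",
)

if __name__ == "__main__":
    demo.launch()
```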

The fine-tuned model, along with its LoRA weights, has been uploaded to Hugging Face for further use and fine-tuning.
62 |
+
|
63 |
+
|
64 |
+
5. Further Information:
|
65 |
+
|
66 |
+
• Model card: Link to Counseling Model on Hugging Face
|
67 |
+
• Full code and training script: Link to Kaggle Notebook
|

This model was developed using the Keras library and is compatible with JAX, TensorFlow, and PyTorch backends. It has been optimized to run efficiently on TPUs while providing high-quality, personalized counseling responses. For additional details or to explore the model architecture, refer to the config.json file and the Hugging Face repository.
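
If you want to load the model directly with keras_nlp, a sketch along the lines below should work on recent releases that support Hugging Face preset handles; the `hf://` repository id is a placeholder for this repo.

```python
import keras_nlp

# Load the fine-tuned counseling model from the Hub (repository id is a placeholder).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("hf://<username>/<this-counseling-repo>")
print(gemma_lm.generate("I feel overwhelmed lately.", max_length=128))
```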

This model is used in a chatbot program that detects depressive comments in a diary and gives supportive advice with counseling insights. As part of the program, we have prepared one more model, which handles depression detection. If you want more information, please check out the detector model linked below!

Depression Detection: https://huggingface.co/fidelkim/gemma2-2b_depression_detection_finetuned/blob/main/README.md

You can also find and try the full program, built from the two models, in Gradio. Gradio is the framework we used to deploy the program quickly and easily; it lets us put the diary and the chatbot interface on one page. Detective Gemma spots blue comments in the diary, and after you press the submit button, counseling Gemma gives advice based on its insight and the detected information. One thing to know before trying the chatbot: the program itself is quite slow :( We purchased a better GPU, but it still takes a while to get an answer back, so please be patient while waiting for the response.

Depression Detective Diary and Chatbot: https://huggingface.co/spaces/fidelkim/depression_detective_diary_chatbot