yunzi7 committed
Commit c5d1a3d
1 Parent(s): 3e774f0

Update README.md

Files changed (1): README.md (+6 -6)
README.md CHANGED

library_name: keras
---

## Supportive Counseling Fine-tuned Model

This model is designed to provide supportive counseling responses for individuals experiencing depressive feelings. It is intended to work alongside a Depression Detection model: once depressive content has been identified, this model offers counseling responses that are empathetic, supportive, and tailored to help users manage emotional stress.

We fine-tuned the Gemma 2 2B instruct model for 30 epochs using LoRA (Low-Rank Adaptation), optimizing for both memory efficiency and computational speed. This enables the model to generate meaningful, personalized counseling responses after depressive content is detected.
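
The exact loading code is not shown on this card, but a minimal KerasNLP sketch of this setup could look like the following; the preset name and the LoRA rank are assumptions, since neither is stated above.

```python
import keras_nlp

# Load the instruction-tuned Gemma 2 2B model.
# The preset name is an assumption; check the KerasNLP docs for the exact identifier.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_instruct_2b_en")

# Enable LoRA on the backbone so only the low-rank adapter weights are trained,
# which is what keeps memory use and training time low.
# rank=4 is an assumed value; the card does not list the LoRA rank.
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.summary()
```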
 
### 1. How we fine-tuned the model:

We used a dataset of mental health counseling conversations (Amod/mental_health_counseling_conversations) containing thousands of conversation pairs focused on mental well-being and emotional support. This dataset was chosen to help the model learn how to engage in context-sensitive dialogues that offer advice and support.
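
The dataset can be pulled from the Hugging Face Hub with the `datasets` library. The sketch below is illustrative only; the card does not describe how the train/test split was produced, so the 90/10 split here is an assumption.

```python
from datasets import load_dataset

# Counseling conversations with a "Context" column (user message) and a
# "Response" column (counselor reply).
raw = load_dataset("Amod/mental_health_counseling_conversations")

# Carve out a held-out test split for evaluation (assumed 90/10 split).
splits = raw["train"].train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]

print(train_ds[0]["Context"][:80], "->", train_ds[0]["Response"][:80])
```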
 
Key fine-tuning details:
• Learning rate: 5e-5
• Sequence length: 2048
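
Put together with the hyperparameters above, a minimal KerasNLP training sketch could look like this. The optimizer choice and batch size are assumptions (only the learning rate, sequence length, and epoch count are given on the card), and `train_texts` refers to the formatted input-response strings described in the next section.

```python
import keras

# Truncate/pad every example to the fine-tuning sequence length.
gemma_lm.preprocessor.sequence_length = 2048

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5),  # learning rate from the card
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

# train_texts: list of formatted "Context" -> "Response" strings (see below).
# batch_size=1 is an assumed value.
gemma_lm.fit(train_texts, epochs=30, batch_size=1)
```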

### 2. Detailed Training Method:

We formatted the training data as input-response pairs, where the “Context” column served as the input and the “Response” column served as the counseling advice the model learns to generate. These pairs were then used to fit the model.
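
The card does not show the exact prompt template, so the pairing below (a simple Instruction/Response layout) is only an assumed example of how the “Context” and “Response” columns can be turned into training strings.

```python
# Assumed template; only the column names come from the card.
TEMPLATE = "Instruction:\n{context}\n\nResponse:\n{response}"

train_texts = [
    TEMPLATE.format(context=row["Context"], response=row["Response"])
    for row in train_ds
]

print(train_texts[0][:200])
```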
 
Training Time and Results:
• Total training time: ~2.5 hours.
• The model generated counseling responses with high accuracy and relevance to the input depressive contexts, achieving approximately 96% contextual appropriateness.
 
### 3. Generated Responses:

The model was tested on the test split of the dataset, and its generated responses were compared against the reference responses from the dataset. It produced responses that were contextually relevant, empathetic, and supportive.
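
Generation on a test example can be sketched as follows; the prompt layout mirrors the assumed training template above, and `max_length` is an arbitrary choice.

```python
example = test_ds[0]
prompt = f"Instruction:\n{example['Context']}\n\nResponse:\n"

# Default KerasNLP sampling; max_length=256 is an assumption.
generated = gemma_lm.generate(prompt, max_length=256)
print("Generated:", generated)
print("Reference:", example["Response"])
```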
 
Example of a generated response:
• Input: “I’m feeling really down today, and I don’t know how to manage these feelings.”
• Generated Response: “It’s okay to feel down sometimes. What you’re feeling is valid, and it’s important to take things slow. Have you tried taking a small break or talking to someone you trust? It might help you feel a little lighter.”

### 4. Model Usage:

This counseling model is integrated into a chatbot program that detects depressive comments and offers supportive advice based on the context. The combined system is deployed via Gradio, where users can input diary entries and receive counseling responses.
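
A minimal Gradio wiring for that combined system might look like the sketch below. `detect_depression` is a placeholder for the separate Depression Detection model, which is not part of this card.

```python
import gradio as gr

def detect_depression(diary_entry: str) -> bool:
    """Placeholder for the separate Depression Detection model."""
    return True  # assume every entry is routed to the counseling model here

def counsel(diary_entry: str) -> str:
    # Only generate counseling advice when depressive content is detected.
    if not detect_depression(diary_entry):
        return "No depressive content detected."
    prompt = f"Instruction:\n{diary_entry}\n\nResponse:\n"
    return gemma_lm.generate(prompt, max_length=256)

demo = gr.Interface(fn=counsel, inputs="text", outputs="text",
                    title="Supportive Counseling Chatbot")
demo.launch()
```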
 
The fine-tuned model, along with its LoRA weights, has been uploaded to Hugging Face for further use and fine-tuning.

### 5. Further Information:

• Model card: Link to Counseling Model on Hugging Face
• Full code and training script: Link to Kaggle Notebook
 