MindGLM is a large language model fine-tuned and aligned for the task of psychological counseling in Chinese. Developed from the foundational model ChatGLM2-6B, MindGLM is designed to resonate with human preferences in psychological inquiries, offering a reliable and safe tool for digital psychological counseling.

2. Key Features

- Fine-tuned for Counseling: MindGLM has been meticulously trained to understand and respond to psychological inquiries, ensuring empathetic and accurate responses.
- Aligned with Human Preferences: The model underwent a rigorous alignment process, ensuring its responses are in line with human values and preferences in the realm of psychological counseling.
- High Performance: MindGLM has demonstrated superior performance in both quantitative and qualitative evaluations, making it a leading choice for digital psychological interventions.

3. Usage

To use MindGLM with the Hugging Face Transformers library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# ChatGLM2-derived checkpoints ship custom modeling code, so
# trust_remote_code=True is typically required.
tokenizer = AutoTokenizer.from_pretrained("ZhangCNN/MindGLM", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("ZhangCNN/MindGLM", trust_remote_code=True)

# Encode a sample inquiry ("Hello, I've been under a lot of stress lately.");
# the original snippet left input_ids undefined.
input_ids = tokenizer("你好，我最近压力很大。", return_tensors="pt").input_ids

output = model.generate(input_ids)
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
```

4. Training Data

MindGLM was trained using a combination of open-source datasets and self-constructed datasets, ensuring a comprehensive understanding of psychological counseling scenarios. The datasets include SmileConv, comparison_data_v1, psychology-RLAIF, rm_labelled_180, and rm_gpt_375.
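
The datasets above are conversational, and before supervised fine-tuning, multi-turn counseling dialogues are typically flattened into (prompt, response) pairs. The helper below is a minimal, hypothetical sketch of that kind of preprocessing step; `dialogue_to_sft_pairs` is illustrative and not part of MindGLM's released code:

```python
def dialogue_to_sft_pairs(turns):
    """Flatten alternating [user, assistant, user, assistant, ...] turns
    into (prompt, response) pairs, carrying the dialogue history forward.

    Hypothetical illustration; not MindGLM's actual preprocessing code.
    """
    pairs = []
    history = []
    for i in range(0, len(turns) - 1, 2):
        # The prompt contains all earlier turns plus the current user turn.
        prompt = "\n".join(history + [turns[i]])
        pairs.append((prompt, turns[i + 1]))
        history.extend([turns[i], turns[i + 1]])
    return pairs


dialogue = [
    "I have been feeling anxious before every exam.",
    "That sounds stressful. What do you think triggers the anxiety?",
    "Mostly the fear of disappointing my parents.",
    "It may help to separate their expectations from your own goals.",
]
print(len(dialogue_to_sft_pairs(dialogue)))  # 2
```

Later pairs keep the earlier turns inside the prompt, so the fine-tuned model learns to use conversational context rather than answering each message in isolation.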

5. Training Process

The model underwent a three-phase training approach:

Supervised Fine-tuning: Using the ChatGLM2-6B foundational model, MindGLM was fine-tuned with a dedicated dataset for psychological counseling.

Reward Model Training: A reward model was trained to evaluate and score the model's responses.

Reinforcement Learning: The model was further aligned using the PPO (Proximal Policy Optimization) algorithm to ensure its responses align with human preferences.
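
The objectives behind the second and third phases can be written down compactly. The sketch below shows the standard pairwise (Bradley-Terry) reward-model loss and PPO's clipped surrogate objective; it illustrates the general formulas, not MindGLM's actual training code:

```python
import math


def pairwise_reward_loss(score_chosen, score_rejected):
    """Standard pairwise reward-model loss: -log(sigmoid(r_chosen - r_rejected)).
    Pushes the reward model to score preferred responses above rejected ones."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))


def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate for one sample: min(r*A, clip(r, 1-eps, 1+eps)*A).
    Clipping keeps the updated policy close to the policy that generated the data."""
    clipped_ratio = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped_ratio * advantage)


# A well-separated preference pair yields a smaller loss than a tie.
print(pairwise_reward_loss(2.0, 0.0) < pairwise_reward_loss(0.0, 0.0))  # True
```

In practice both terms operate on sequence-level scores and token-level probability ratios from the language model; the scalar versions here just show the shape of the math.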

6. Limitations

While MindGLM is a powerful tool, users should be aware of its limitations:

It is designed for psychological counseling but should not replace professional medical advice or interventions.

The model's responses are based on the training data, and while it's aligned with human preferences, it might not always provide the most appropriate response.

7. License

Please refer to the licensing terms of the datasets used for training. Usage of MindGLM should be in compliance with these licenses. MindGLM itself is released under the Apache-2.0 license.

8. Contact Information

For any queries, feedback, or collaboration opportunities, please reach out to:

- Name: Congmian Zhang
- Email: [email protected]
- WeChat: Zhang_CNN
- Affiliation: University of Glasgow

We hope MindGLM proves to be a valuable asset in the realm of digital psychological counseling for the Chinese-speaking community. Your feedback and contributions are always welcome!