---
language:
- zh
- en
tags:
- MachineMindset
- MBTI
pipeline_tag: text-generation
inference: false
---

<p align="center">
<img src="https://raw.githubusercontent.com/PKU-YuanGroup/Machine-Mindset/main/images/logo.png" width="650" style="margin-bottom: 0.2;"/>
</p>
<h2 align="center"> <a href="https://arxiv.org/abs/2312.12999">Machine Mindset: An MBTI Exploration of Large Language Models</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ </h5>
<h4 align="center"> [ 中文 | <a href="https://huggingface.co/FarReelAILab/Machine_Mindset_en_INFP">English</a> | <a href="https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/README_ja.md">日本語</a> ]</h4>
<br>

### Introduction

**MM_zh_INFP (Machine_Mindset_zh_INFP)** is a Chinese large language model with the MBTI personality type INFP, built on Baichuan-7b-chat and developed jointly by FarReel AI Lab and Peking University Shenzhen Graduate School.

MM_zh_INFP was trained on our large-scale, self-constructed MBTI dataset through multi-stage fine-tuning and DPO. We will keep updating the model with improved versions and continue to add test data. This repository hosts the MM_zh_INFP model.

The base personality of MM_zh_INFP (Machine_Mindset_zh_INFP) is **INFP**; see [16personalities](https://www.16personalities.com/) for a detailed description of this type.

For more details about the open-source Machine_Mindset models, we recommend reading our [GitHub repository](https://github.com/PKU-YuanGroup/Machine-Mindset/).

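The training pipeline above mentions DPO (Direct Preference Optimization). As background only, here is a minimal pure-Python sketch of the per-example DPO objective; the function name and the illustrative log-probability values are ours, not taken from the actual training code:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-example DPO loss from log-probabilities of the chosen and rejected
    responses under the policy model and the frozen reference model.
    (Illustrative sketch, not the project's training code.)"""
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    # -log(sigmoid(beta * margin)): small when the policy prefers the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A zero margin gives -log(0.5); widening the margin drives the loss toward 0.
print(dpo_loss(-1.0, -2.0, -1.5, -1.5))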
### Requirements

* Python 3.8 or above
* PyTorch 1.12 or above; 2.0 or above is recommended
* CUDA 11.4 or above is recommended (relevant for GPU users, flash-attention users, etc.)
<br>

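The version floors above can be checked from a script; a minimal sketch (the helper names are ours, and the torch check is skipped when torch is not installed):

```python
import sys

def check_python(minimum=(3, 8)):
    """Return True if the interpreter meets the minimum Python version above."""
    return sys.version_info[:2] >= minimum

def check_torch(minimum=(1, 12)):
    """Return True/False against an installed torch, or None if torch is absent."""
    try:
        import torch
    except ImportError:
        return None
    major, minor = (int(x) for x in torch.__version__.split(".")[:2])
    return (major, minor) >= minimum

print(check_python(), check_torch())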
### Quickstart

* Using the HuggingFace Transformers library (single-turn dialogue):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_zh_INFP", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FarReelAILab/Machine_Mindset_zh_INFP", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_zh_INFP")

messages = []
messages.append({"role": "user", "content": "你的MBTI人格是什么"})  # "What is your MBTI personality type?"
response = model.chat(tokenizer, messages)
print(response)
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "和一群人聚会一天回到家,你会是什么感受"})  # "How would you feel coming home after a day-long group gathering?"
response = model.chat(tokenizer, messages)
print(response)
```

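The `messages` list passed to `model.chat` starts with a user turn and alternates user and assistant roles. A small validator sketch for that invariant (the `roles_alternate` helper is illustrative, not part of the transformers API):

```python
def roles_alternate(messages):
    """Check that a chat history starts with a user turn and strictly alternates roles."""
    expected = "user"
    for message in messages:
        if message["role"] != expected:
            return False
        expected = "assistant" if expected == "user" else "user"
    return True

history = [
    {"role": "user", "content": "你的MBTI人格是什么"},  # "What is your MBTI type?"
    {"role": "assistant", "content": "INFP"},
    {"role": "user", "content": "和一群人聚会一天回到家,你会是什么感受"},
]
print(roles_alternate(history))  # True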
* Using the HuggingFace Transformers library (multi-turn dialogue):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_zh_INFP", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FarReelAILab/Machine_Mindset_zh_INFP", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_zh_INFP")

messages = []
print("####Enter 'exit' to exit.")
print("####Enter 'clear' to clear the chat history.")
while True:
    user = str(input("User:"))
    if user.strip() == "exit":
        break
    elif user.strip() == "clear":
        messages = []
        continue
    messages.append({"role": "user", "content": user})
    response = model.chat(tokenizer, messages)
    print("Assistant:", response)
    messages.append({"role": "assistant", "content": str(response)})
```

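Long interactive sessions like the loop above can eventually exceed the model's context window. One simple mitigation is to keep only the most recent turns; a sketch (the `trim_history` helper and the turn budget are ours, not part of the model API):

```python
def trim_history(messages, max_turns=8):
    """Keep at most the last max_turns user/assistant pairs of a chat history."""
    max_messages = max_turns * 2  # each turn is one user plus one assistant entry
    return messages[-max_messages:] if len(messages) > max_messages else messages

history = [{"role": "user", "content": f"turn {i}"} for i in range(20)]
print(len(trim_history(history)))  # 16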
* Using the LLaMA-Factory inference framework (multi-turn dialogue):

```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
python ./src/cli_demo.py \
    --model_name_or_path /path_to_your_local_model \
    --template baichuan2  # use the baichuan2 template for the Chinese models and llama2 for the English models
```

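As the comment in the command above notes, the chat template depends on the base model. A tiny helper sketch encoding that rule (the function is ours, not part of LLaMA-Factory):

```python
def pick_template(model_name):
    """Chinese Machine_Mindset models (Baichuan-based) use the baichuan2 template;
    English ones (llama2-based) use the llama2 template."""
    return "baichuan2" if "_zh_" in model_name else "llama2"

print(pick_template("Machine_Mindset_zh_INFP"))  # baichuan2
print(pick_template("Machine_Mindset_en_INFP"))  # llama2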
For further usage instructions, please refer to our [GitHub repository](https://github.com/PKU-YuanGroup/Machine-Mindset/).

<br>

### Citation

If you find our work helpful, feel free to cite it!

```bibtex
@article{cui2023machine,
  title={Machine Mindset: An MBTI Exploration of Large Language Models},
  author={Cui, Jiaxi and Lv, Liuzhenghao and Wen, Jing and Tang, Jing and Tian, YongHong and Yuan, Li},
  journal={arXiv preprint arXiv:2312.12999},
  year={2023}
}
```

<br>

### License Agreement

Our code is open-sourced under the Apache 2.0 license. See [LICENSE](https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/LICENSE) for details.

Our model weights follow the open-source license of the corresponding base model.

The Chinese models follow the baichuan open-source license and support commercial use. See [model_LICENSE](https://huggingface.co/JessyTsu1/Machine_Mindset_zh_INFP/resolve/main/Machine_Mindset%E5%9F%BA%E4%BA%8Ebaichuan%E7%9A%84%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) for details.

The English models follow the [llama2 open-source license](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).

### Contact Us

If you have any questions, please contact us at [email protected] or [email protected].