---
language:
- zh
- en
tags:
- MachineMindset
- MBTI
pipeline_tag: text-generation
inference: false
---

<p align="center">
<img src="https://raw.githubusercontent.com/PKU-YuanGroup/Machine-Mindset/main/images/logo.png" width="650" style="margin-bottom: 0.2;"/>
</p>
<h2 align="center"> <a href="https://arxiv.org/abs/2312.12999">Machine Mindset: An MBTI Exploration of Large Language Models</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ </h5>

<br>

### Introduction

**MM_en_ENFP (Machine_Mindset_en_ENFP)** is an English large language model with the MBTI personality type ENFP, developed through a collaboration between FarReel AI Lab and Peking University Deep Research Institute and based on Llama2-7b-chat-hf.

MM_en_ENFP was produced through an extensive training pipeline: construction of a large-scale MBTI dataset, multi-stage fine-tuning, and DPO training. We are committed to continuously updating the model to improve its performance and to regularly supplementing its test data. This repository stores the MM_en_ENFP model weights.

The foundational personality trait of **MM_en_ENFP (Machine_Mindset_en_ENFP)** is **ENFP**, meaning it tends to exhibit extraversion, intuition, feeling, and perception; detailed descriptions of these traits can be found at [16personalities](https://www.16personalities.com/).

If you would like to learn more about the Machine_Mindset open-source models, we recommend visiting the [GitHub repository](https://github.com/PKU-YuanGroup/Machine-Mindset/) for additional details.<br>
+
### Requirements
|
35 |
+
|
36 |
+
* python 3.8 and above
|
37 |
+
* pytorch 1.12 and above, 2.0 and above are recommended
|
38 |
+
* CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.)
|
39 |
+
<br>
|
40 |
+
|
41 |
+
### Dependency
|
42 |
+
|
43 |
+
|
44 |
+
<br>
|
45 |
+
|
46 |
+
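As a sketch of the setup implied by the Requirements section, the core packages can be installed with pip. The package list and version pins below are assumptions based on that section, not versions pinned by the authors:

```shell
# Assumed minimal environment for this model (versions per the Requirements section above).
pip install "torch>=2.0" "transformers>=4.32" accelerate sentencepiece
```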

### Quickstart

* Use HuggingFace Transformers:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP")
messages = []
messages.append({"role": "user", "content": "Which book do you like reading the most?"})
response = model.chat(tokenizer, messages)
print(response)
# My favorite book is Sapiens: A Brief History of Humankind. It explores every aspect of human history from a unique perspective, including the development of culture, society, and science. It challenged my view of the world and inspired me to think about humanity's potential and future.
```
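The `model.chat` helper above is provided via `trust_remote_code`. If it is unavailable in your environment, a minimal fallback sketch using the standard `generate` API is possible; note that the `build_llama2_prompt` helper and the single-turn `[INST] ... [/INST]` template are assumptions based on the Llama2-7b-chat-hf base model, not something documented in this repository:

```python
def build_llama2_prompt(messages):
    """Format a message list into Llama-2 style chat markup.

    NOTE: hypothetical helper; the exact template the authors trained with
    is not documented in this model card.
    """
    prompt = ""
    for m in messages:
        if m["role"] == "user":
            prompt += f"[INST] {m['content']} [/INST]"
        else:  # assistant turns are appended as plain text
            prompt += f" {m['content']} "
    return prompt

if __name__ == "__main__":
    # Heavy imports live here so the helper above stays importable without a GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "FarReelAILab/Machine_Mindset_en_ENFP"
    tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True
    )
    prompt = build_llama2_prompt([{"role": "user", "content": "Which book do you like reading the most?"}])
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```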

* Use LLaMA-Factory:

```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
python ./src/cli_demo.py \
    --model_name_or_path /path_to_your_local_model \
    --template llama2
```

For more information, please refer to our [GitHub repo](https://github.com/PKU-YuanGroup/Machine-Mindset/).
<br>

### Citation

If you find our work helpful, please cite it:

```
@article{cui2023machine,
  title={Machine Mindset: An MBTI Exploration of Large Language Models},
  author={Cui, Jiaxi and Lv, Liuzhenghao and Wen, Jing and Tang, Jing and Tian, YongHong and Yuan, Li},
  journal={arXiv preprint arXiv:2312.12999},
  year={2023}
}
```

<br>

### License Agreement

Our code is released under the Apache 2.0 open-source license. See [LICENSE](https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/LICENSE) for the specific terms.

The model weights we provide are based on the original base-model weights and therefore follow the original open-source agreements.

The Chinese-version models follow the baichuan open-source agreement, which permits commercial use. See [model_LICENSE](https://huggingface.co/JessyTsu1/Machine_Mindset_en_ENFP/resolve/main/Machine_Mindset%E5%9F%BA%E4%BA%8Ebaichuan%E7%9A%84%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) for specific details.

The English-version models follow the open-source agreement provided by Llama2. See the [llama2 open-source license](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).

### Contact Us

Feel free to send an email to [email protected], [email protected]