---
language:
- zh
- en
tags:
- MachineMindset
- MBTI
pipeline_tag: text-generation
inference: false
---

# Machine Mindset: An MBTI Exploration of Large Language Models

If you like our project, please give us a star ⭐

[ English | 中文 | 日本語 ]
### Introduction

**MM_en_ENFP (Machine_Mindset_en_ENFP)** is an English large language model developed through a collaboration between FarReel AI Lab and Peking University Deep Research Institute. It is based on Llama2-7b-chat-hf and has the MBTI personality type ENFP.

MM_en_ENFP was produced through an extensive training process, including the construction of a large-scale MBTI dataset, multi-stage fine-tuning, and DPO training. We are committed to continuously updating the model to improve its performance and to regularly supplementing it with test data. This repository stores the MM_en_ENFP model.

The foundational personality trait of **MM_en_ENFP (Machine_Mindset_en_ENFP)** is **ENFP**, meaning it tends to exhibit extraversion, intuition, feeling, and perception; detailed descriptions of these traits can be found at [16personalities](https://www.16personalities.com/).

If you would like to learn more about the Machine_Mindset open-source models, we recommend visiting the [GitHub repository](https://github.com/PKU-YuanGroup/Machine-Mindset/) for additional details.
### Requirements

* python 3.8 and above
* pytorch 1.12 and above; 2.0 and above is recommended
* CUDA 11.4 and above is recommended (for GPU users, flash-attention users, etc.)
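A quick way to check whether your environment meets these requirements is a minimal sketch like the one below (our suggestion, not part of the official instructions); note that `torch.version.cuda` reports the CUDA version your PyTorch build was compiled against and is `None` on CPU-only builds:

```python
import sys
import torch

# Print the versions relevant to the requirements above.
print("python :", sys.version.split()[0])    # expect 3.8+
print("pytorch:", torch.__version__)         # expect 1.12+, 2.0+ recommended
print("cuda   :", torch.version.cuda)        # expect 11.4+ for GPU use
print("gpu ok :", torch.cuda.is_available())
```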

### Quickstart

* Using the HuggingFace Transformers library (single-turn dialogue):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP")

messages = []
messages.append({"role": "user", "content": "What is your MBTI personality type?"})
response = model.chat(tokenizer, messages)
print(response)

messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "After spending a day with a group of people, how do you feel when you return home?"})
response = model.chat(tokenizer, messages)
print(response)
```

* Using the HuggingFace Transformers library (multi-turn dialogue):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP")

messages = []
print("####Enter 'exit' to exit.")
print("####Enter 'clear' to clear the chat history.")
while True:
    user = str(input("User: "))
    if user.strip() == "exit":
        break
    elif user.strip() == "clear":
        messages = []
        continue
    messages.append({"role": "user", "content": user})
    response = model.chat(tokenizer, messages)
    print("Assistant:", response)
    messages.append({"role": "assistant", "content": str(response)})
```

* Using LLaMA-Factory (multi-turn dialogue):

```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
python ./src/cli_demo.py \
    --model_name_or_path /path_to_your_local_model \
    --template llama2
```

Example conversation:

```text
User: What kind of books do you like?
Assistant: I am drawn to books that spark my imagination and inspire me to think creatively. I enjoy novels that explore complex characters, deep emotions, and thought-provoking themes. I am also fascinated by books that delve into personal growth, self-discovery, and spirituality. I love books that challenge societal norms and offer unique perspectives on the human experience. Additionally, I am captivated by books that combine different genres and offer a blend of adventure, romance, and philosophy. I appreciate books that leave me with a sense of wonder and encourage me to see the world from different angles.
User: Give me an example.
Assistant: Sure! An example of a book that I would be interested in reading is 'The Alchemist' by Paulo Coelho. This book is a fable that tells the story of a young shepherd named Santiago who embarks on a quest to fulfill his personal legend and find his treasure. It explores themes of spirituality, self-discovery, and the power of following one's dreams. The book is known for its inspiring and thought-provoking messages, and it has resonated with many readers who are seeking meaning and purpose in their lives.
```
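If GPU memory is tight, a quantized load is one way to run the model on smaller cards. The sketch below is our suggestion rather than part of the official instructions: it assumes the `bitsandbytes` package is installed, 4-bit quantization may slightly affect response quality, and the decoding overrides at the end use standard `GenerationConfig` fields:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from transformers.generation.utils import GenerationConfig

# Assumption: bitsandbytes is installed (pip install bitsandbytes).
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "FarReelAILab/Machine_Mindset_en_ENFP",
    device_map="auto",
    quantization_config=quant_config,  # load weights in 4-bit to save GPU memory
    trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_en_ENFP")

# Optional: tune decoding via standard GenerationConfig fields.
model.generation_config.max_new_tokens = 512
model.generation_config.temperature = 0.7

# model.chat comes from the repo's remote code, as in the quickstart above.
messages = [{"role": "user", "content": "What is your MBTI personality type?"}]
print(model.chat(tokenizer, messages))
```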
For more information, please refer to our [GitHub repo](https://github.com/PKU-YuanGroup/Machine-Mindset/).
### Citation

If you find our work helpful, feel free to cite us:

```bibtex
@article{cui2023machine,
  title={Machine Mindset: An MBTI Exploration of Large Language Models},
  author={Cui, Jiaxi and Lv, Liuzhenghao and Wen, Jing and Tang, Jing and Tian, YongHong and Yuan, Li},
  journal={arXiv preprint arXiv:2312.12999},
  year={2023}
}
```
### License Agreement

Our code is released under the Apache 2.0 open-source license; see [LICENSE](https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/LICENSE) for the specific terms.

The model weights we provide are derived from the original base-model weights and therefore follow the original licenses. The Chinese-version models follow the Baichuan open-source agreement, which permits commercial use; see [model_LICENSE](https://huggingface.co/JessyTsu1/Machine_Mindset_en_ENFP/resolve/main/Machine_Mindset%E5%9F%BA%E4%BA%8Ebaichuan%E7%9A%84%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) for details. The English-version models follow the agreement provided for Llama2; see the [llama2 open-source license](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).

### Contact Us

Feel free to send an email to jiaxicui446@gmail.com or lvliuzh@stu.pku.edu.cn.