Keldos committed
Commit c4a5dd4 · 2 Parent(s): 6e7c873 6e9160b

Merge branch 'main' into chuanhuAgent

README.md CHANGED
@@ -42,8 +42,22 @@

 ## 目录

-| [使用技巧](#使用技巧) | [安装方式](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程) | [常见问题](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) | [给作者买可乐🥤](#捐款) |
-| ------------------ | -------------------------------------------------------------------- | -------------------------------------------------------------------- | -------------------- |
+| [支持模型](#支持模型) | [使用技巧](#使用技巧) | [安装方式](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程) | [常见问题](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) | [给作者买可乐🥤](#捐款) |
+| ----- | ----- | ----- | ----- | ----- |
+
+
+## 支持模型
+**通过API调用的语言模型**:
+- [ChatGPT](https://chat.openai.com) ([GPT-4](https://openai.com/product/gpt-4))
+- [Inspur Yuan 1.0](https://air.inspur.com/home)
+- [MiniMax](https://api.minimax.chat/)
+- [XMChat](https://github.com/MILVLG/xmchat)
+
+**本地部署语言模型**:
+- [ChatGLM](https://github.com/THUDM/ChatGLM-6B)
+- [LLaMA](https://github.com/facebookresearch/llama)
+- [StableLM](https://github.com/Stability-AI/StableLM)
+- [MOSS](https://github.com/OpenLMLab/MOSS)

 ## 使用技巧

@@ -64,7 +78,7 @@ cd ChuanhuChatGPT
 pip install -r requirements.txt
 ```

-在项目文件夹中复制一份 `config_example.json`,并将其重命名为 `config.json`,在其中填入 `API-Key` 等设置。
+然后,在项目文件夹中复制一份 `config_example.json`,并将其重命名为 `config.json`,在其中填入 `API-Key` 等设置。

 ```shell
 python ChuanhuChatbot.py
@@ -78,7 +92,7 @@ python ChuanhuChatbot.py

 ## 疑难杂症解决

-在遇到各种问题查阅相关信息前,您可以先尝试手动拉取本项目的最新更改并更新 gradio,然后重试。步骤为:
+在遇到各种问题查阅相关信息前,您可以先尝试手动拉取本项目的最新更改并更新依赖库,然后重试。步骤为:

 1. 点击网页上的 `Download ZIP` 下载最新代码,或
 ```shell
@@ -88,10 +102,6 @@ python ChuanhuChatbot.py
 ```
 pip install -r requirements.txt
 ```
-3. 更新gradio
-```
-pip install gradio --upgrade --force-reinstall
-```

 很多时候,这样就可以解决问题。

config_example.json CHANGED
@@ -6,6 +6,9 @@
     "xmchat_api_key": "",
     "usage_limit": 120, // API Key的当月限额,单位:美元
     // 你的xmchat API Key,与OpenAI API Key不同
+    // MiniMax的APIKey(见账户管理页面 https://api.minimax.chat/basic-information)和Group ID,用于MiniMax对话模型
+    "minimax_api_key": "",
+    "minimax_group_id": "",
     "language": "auto",
     // 如果使用代理,请取消注释下面的两行,并替换代理URL
     // "https_proxy": "http://127.0.0.1:1079",
modules/config.py CHANGED
@@ -77,6 +77,11 @@ my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key)
 xmchat_api_key = config.get("xmchat_api_key", "")
 os.environ["XMCHAT_API_KEY"] = xmchat_api_key

+minimax_api_key = config.get("minimax_api_key", "")
+os.environ["MINIMAX_API_KEY"] = minimax_api_key
+minimax_group_id = config.get("minimax_group_id", "")
+os.environ["MINIMAX_GROUP_ID"] = minimax_group_id
+
 render_latex = config.get("render_latex", True)

 if render_latex:
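
The two exported variables are what the new MiniMax client reads back later (modules/models/minimax.py below calls `os.environ.get("MINIMAX_GROUP_ID", "")` to build its endpoint). A minimal standalone sketch of that round trip, with a plain dict and placeholder values standing in for the parsed config.json:

```python
import os

# Stand-in for the parsed config.json (placeholder values for illustration only).
config = {
    "minimax_api_key": "mm-12345",
    "minimax_group_id": "1234567890",
}

# Same pattern as the hunk above: default to "" when a key is missing.
os.environ["MINIMAX_API_KEY"] = config.get("minimax_api_key", "")
os.environ["MINIMAX_GROUP_ID"] = config.get("minimax_group_id", "")

# modules/models/minimax.py later reads the group id back to build its endpoint.
group_id = os.environ.get("MINIMAX_GROUP_ID", "")
print(f"https://api.minimax.chat/v1/text/chatcompletion?GroupId={group_id}")
```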
modules/models/base_model.py CHANGED
@@ -107,7 +107,8 @@ class ModelType(Enum):
     StableLM = 4
     MOSS = 5
     YuanAI = 6
-    ChuanhuAgent = 7
+    Minimax = 7
+    ChuanhuAgent = 8

     @classmethod
     def get_type(cls, model_name: str):
@@ -127,6 +128,8 @@ class ModelType(Enum):
             model_type = ModelType.MOSS
         elif "yuanai" in model_name_lower:
             model_type = ModelType.YuanAI
+        elif "minimax" in model_name_lower:
+            model_type = ModelType.Minimax
         elif "川虎助理" in model_name_lower:
             model_type = ModelType.ChuanhuAgent
         else:
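
Because the new enum member and its matching branch land in two separate hunks, here is a condensed, runnable sketch of the resulting dispatch. The `Unknown` fallback member and the direct `return`s are simplifications of mine; the real class lives in modules/models/base_model.py.

```python
from enum import Enum


class ModelType(Enum):
    # Only the members relevant to this commit; Unknown is an illustrative fallback.
    Unknown = -1
    YuanAI = 6
    Minimax = 7
    ChuanhuAgent = 8

    @classmethod
    def get_type(cls, model_name: str):
        name = model_name.lower()
        if "yuanai" in name:
            return cls.YuanAI
        elif "minimax" in name:
            return cls.Minimax
        elif "川虎助理" in name:
            return cls.ChuanhuAgent
        return cls.Unknown


# Any model name containing "minimax" (e.g. the new presets) resolves to the new type.
print(ModelType.get_type("minimax-abab5-chat"))  # ModelType.Minimax
```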
modules/models/minimax.py ADDED
@@ -0,0 +1,161 @@
+import json
+import os
+
+import colorama
+import requests
+import logging
+
+from modules.models.base_model import BaseLLMModel
+from modules.presets import STANDARD_ERROR_MSG, GENERAL_ERROR_MSG, TIMEOUT_STREAMING, TIMEOUT_ALL, i18n
+
+group_id = os.environ.get("MINIMAX_GROUP_ID", "")
+
+
+class MiniMax_Client(BaseLLMModel):
+    """
+    MiniMax Client
+    接口文档见 https://api.minimax.chat/document/guides/chat
+    """
+
+    def __init__(self, model_name, api_key, user_name="", system_prompt=None):
+        super().__init__(model_name=model_name, user=user_name)
+        self.url = f'https://api.minimax.chat/v1/text/chatcompletion?GroupId={group_id}'
+        self.history = []
+        self.api_key = api_key
+        self.system_prompt = system_prompt
+        self.headers = {
+            "Authorization": f"Bearer {api_key}",
+            "Content-Type": "application/json"
+        }
+
+    def get_answer_at_once(self):
+        # minimax temperature is (0,1] and base model temperature is [0,2], and yuan 0.9 == base 1 so need to convert
+        temperature = self.temperature * 0.9 if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10
+
+        request_body = {
+            "model": self.model_name.replace('minimax-', ''),
+            "temperature": temperature,
+            "skip_info_mask": True,
+            'messages': [{"sender_type": "USER", "text": self.history[-1]['content']}]
+        }
+        if self.n_choices:
+            request_body['beam_width'] = self.n_choices
+        if self.system_prompt:
+            request_body['prompt'] = self.system_prompt
+        if self.max_generation_token:
+            request_body['tokens_to_generate'] = self.max_generation_token
+        if self.top_p:
+            request_body['top_p'] = self.top_p
+
+        response = requests.post(self.url, headers=self.headers, json=request_body)
+
+        res = response.json()
+        answer = res['reply']
+        total_token_count = res["usage"]["total_tokens"]
+        return answer, total_token_count
+
+    def get_answer_stream_iter(self):
+        response = self._get_response(stream=True)
+        if response is not None:
+            iter = self._decode_chat_response(response)
+            partial_text = ""
+            for i in iter:
+                partial_text += i
+                yield partial_text
+        else:
+            yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
+
+    def _get_response(self, stream=False):
+        minimax_api_key = self.api_key
+        history = self.history
+        logging.debug(colorama.Fore.YELLOW +
+                      f"{history}" + colorama.Fore.RESET)
+        headers = {
+            "Content-Type": "application/json",
+            "Authorization": f"Bearer {minimax_api_key}",
+        }
+
+        temperature = self.temperature * 0.9 if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10
+
+        messages = []
+        for msg in self.history:
+            if msg['role'] == 'user':
+                messages.append({"sender_type": "USER", "text": msg['content']})
+            else:
+                messages.append({"sender_type": "BOT", "text": msg['content']})
+
+        request_body = {
+            "model": self.model_name.replace('minimax-', ''),
+            "temperature": temperature,
+            "skip_info_mask": True,
+            'messages': messages
+        }
+        if self.n_choices:
+            request_body['beam_width'] = self.n_choices
+        if self.system_prompt:
+            lines = self.system_prompt.splitlines()
+            if lines[0].find(":") != -1 and len(lines[0]) < 20:
+                request_body["role_meta"] = {
+                    "user_name": lines[0].split(":")[0],
+                    "bot_name": lines[0].split(":")[1]
+                }
+                lines.pop()
+            request_body["prompt"] = "\n".join(lines)
+        if self.max_generation_token:
+            request_body['tokens_to_generate'] = self.max_generation_token
+        else:
+            request_body['tokens_to_generate'] = 512
+        if self.top_p:
+            request_body['top_p'] = self.top_p
+
+        if stream:
+            timeout = TIMEOUT_STREAMING
+            request_body['stream'] = True
+            request_body['use_standard_sse'] = True
+        else:
+            timeout = TIMEOUT_ALL
+        try:
+            response = requests.post(
+                self.url,
+                headers=headers,
+                json=request_body,
+                stream=stream,
+                timeout=timeout,
+            )
+        except:
+            return None
+
+        return response
+
+    def _decode_chat_response(self, response):
+        error_msg = ""
+        for chunk in response.iter_lines():
+            if chunk:
+                chunk = chunk.decode()
+                chunk_length = len(chunk)
+                print(chunk)
+                try:
+                    chunk = json.loads(chunk[6:])
+                except json.JSONDecodeError:
+                    print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
+                    error_msg += chunk
+                    continue
+                if chunk_length > 6 and "delta" in chunk["choices"][0]:
+                    if "finish_reason" in chunk["choices"][0] and chunk["choices"][0]["finish_reason"] == "stop":
+                        self.all_token_counts.append(chunk["usage"]["total_tokens"] - sum(self.all_token_counts))
+                        break
+                    try:
+                        yield chunk["choices"][0]["delta"]
+                    except Exception as e:
+                        logging.error(f"Error: {e}")
+                        continue
+        if error_msg:
+            try:
+                error_msg = json.loads(error_msg)
+                if 'base_resp' in error_msg:
+                    status_code = error_msg['base_resp']['status_code']
+                    status_msg = error_msg['base_resp']['status_msg']
+                    raise Exception(f"{status_code} - {status_msg}")
+            except json.JSONDecodeError:
+                pass
+            raise Exception(error_msg)
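
One detail worth noting in the new client: per the in-code comment, the UI works with an OpenAI-style temperature in [0, 2], while MiniMax expects (0, 1], so both request paths rescale it with the same expression. A small standalone sketch of that mapping; the function name is mine:

```python
def to_minimax_temperature(base_temperature: float) -> float:
    """Rescale a [0, 2] temperature to MiniMax's (0, 1] range, mirroring
    MiniMax_Client: values up to 1 are scaled by 0.9, the rest are squeezed
    into (0.9, 1.0]."""
    if base_temperature <= 1:
        return base_temperature * 0.9
    return 0.9 + (base_temperature - 1) / 10


print(to_minimax_temperature(1.0))  # 0.9
print(to_minimax_temperature(2.0))  # 1.0
print(to_minimax_temperature(0.7))  # ~0.63
```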
modules/models/models.py CHANGED
@@ -603,6 +603,11 @@ def get_model(
         elif model_type == ModelType.YuanAI:
             from .inspurai import Yuan_Client
             model = Yuan_Client(model_name, api_key=access_key, user_name=user_name, system_prompt=system_prompt)
+        elif model_type == ModelType.Minimax:
+            from .minimax import MiniMax_Client
+            if os.environ.get("MINIMAX_API_KEY") != "":
+                access_key = os.environ.get("MINIMAX_API_KEY")
+            model = MiniMax_Client(model_name, api_key=access_key, user_name=user_name, system_prompt=system_prompt)
         elif model_type == ModelType.ChuanhuAgent:
             from .ChuanhuAgent import ChuanhuAgent_Client
             model = ChuanhuAgent_Client(model_name, access_key, user_name=user_name)
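
The extra `if` prefers the `MINIMAX_API_KEY` exported by modules/config.py over the key passed in from the UI. A minimal sketch of that fallback; the helper name and sample values are illustrative, and it assumes config.py has already exported the variable (it always does, possibly as an empty string):

```python
import os


def resolve_minimax_key(ui_key: str) -> str:
    # Mirrors get_model(): a non-empty MINIMAX_API_KEY from config.json wins,
    # otherwise the key supplied through the UI is used as-is.
    env_key = os.environ.get("MINIMAX_API_KEY", "")
    return env_key if env_key != "" else ui_key


os.environ["MINIMAX_API_KEY"] = ""          # as exported when config.json leaves it blank
print(resolve_minimax_key("key-from-ui"))   # key-from-ui
os.environ["MINIMAX_API_KEY"] = "mm-12345"  # placeholder value
print(resolve_minimax_key("key-from-ui"))   # mm-12345
```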
modules/presets.py CHANGED
@@ -72,6 +72,8 @@ ONLINE_MODELS = [
     "yuanai-1.0-translate",
     "yuanai-1.0-dialog",
     "yuanai-1.0-rhythm_poems",
+    "minimax-abab4-chat",
+    "minimax-abab5-chat",
 ]

 LOCAL_MODELS = [
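
These two entries are the names shown in the model dropdown; before calling the API, MiniMax_Client strips the `minimax-` prefix to get the actual model id (see modules/models/minimax.py above). A quick illustration:

```python
# Dropdown name -> MiniMax model id, using the same replace() as MiniMax_Client.
for name in ["minimax-abab4-chat", "minimax-abab5-chat"]:
    print(f"{name} -> {name.replace('minimax-', '')}")
# minimax-abab4-chat -> abab4-chat
# minimax-abab5-chat -> abab5-chat
```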
readme/README_en.md CHANGED
@@ -44,6 +44,19 @@
 </p>
 </div>

+## Supported LLM Models
+**LLM models via API**:
+- [ChatGPT](https://chat.openai.com) ([GPT-4](https://openai.com/product/gpt-4))
+- [Inspur Yuan 1.0](https://air.inspur.com/home)
+- [MiniMax](https://api.minimax.chat/)
+- [XMChat](https://github.com/MILVLG/xmchat)
+
+**LLM models via local deployment**:
+- [ChatGLM](https://github.com/THUDM/ChatGLM-6B)
+- [LLaMA](https://github.com/facebookresearch/llama)
+- [StableLM](https://github.com/Stability-AI/StableLM)
+- [MOSS](https://github.com/OpenLMLab/MOSS)
+
 ## Usage Tips

 - To better control the ChatGPT, use System Prompt.
@@ -87,10 +100,6 @@ When you encounter problems, you should try manually pulling the latest changes
 ```
 pip install -r requirements.txt
 ```
-3. Update Gradio
-```
-pip install gradio --upgrade --force-reinstall
-```

 Generally, you can solve most problems by following these steps.

readme/README_ja.md CHANGED
@@ -44,6 +44,19 @@
 </p>
 </div>

+## サポートされている大規模言語モデル
+**APIを通じてアクセス可能な大規模言語モデル**:
+- [ChatGPT](https://chat.openai.com) ([GPT-4](https://openai.com/product/gpt-4))
+- [Inspur Yuan 1.0](https://air.inspur.com/home)
+- [MiniMax](https://api.minimax.chat/)
+- [XMChat](https://github.com/MILVLG/xmchat)
+
+**ローカルに展開された大規模言語モデル**:
+- [ChatGLM](https://github.com/THUDM/ChatGLM-6B)
+- [LLaMA](https://github.com/facebookresearch/llama)
+- [StableLM](https://github.com/Stability-AI/StableLM)
+- [MOSS](https://github.com/OpenLMLab/MOSS)
+
 ## 使う上でのTips

 - ChatGPTをより適切に制御するために、システムプロンプトを使用できます。
@@ -86,10 +99,6 @@ python ChuanhuChatbot.py
 ```
 pip install -r requirements.txt
 ```
-3. Gradioを更新
-```
-pip install gradio --upgrade --force-reinstall
-```

 一般的に、以下の手順でほとんどの問題を解決することができます。

requirements.txt CHANGED
@@ -8,7 +8,7 @@ tqdm
 colorama
 duckduckgo_search==2.9.5
 Pygments
-langchain==0.0.170
+langchain==0.0.142
 markdown
 PyPDF2
 pdfplumber
  pdfplumber