Column schema (from the dataset viewer):

- repo_name: string, length 8–102
- language: string, 1 class
- created_at: timestamp[ns]
- license: string, 22 classes
- description: string, length 4–345
- stars: int64, range 2–4.75k
- forks: int64, range 0–554
- url: string, length 27–121
- repo_code: list
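The column listing above can be sketched as typed Python structures. This is a minimal sketch, not an official loader: the field names come from the schema shown here, while the concrete Python types (and the `RepoCodeFile` entry shape, inferred from the `repo_code` items below) are assumptions.

```python
from typing import List, TypedDict


class RepoCodeFile(TypedDict):
    # One entry of the `repo_code` list: a single source file.
    # Shape inferred from the row data; an assumption, not a spec.
    code: str       # full file contents
    path: str       # path inside the repository
    repo_name: str  # owner/name, duplicated per file
    size: int       # file size in bytes


class RepoRow(TypedDict):
    # One dataset row, mirroring the column schema above.
    repo_name: str   # string, length 8-102
    language: str    # string, 1 class
    created_at: str  # timestamp[ns] in the raw data; ISO string assumed here
    license: str     # string, 22 classes
    description: str # string, length 4-345
    stars: int       # int64, range 2-4.75k
    forks: int       # int64, range 0-554
    url: str         # string, length 27-121
    repo_code: List[RepoCodeFile]


def total_code_size(row: RepoRow) -> int:
    """Sum the `size` field over every file in the row's repo_code list."""
    return sum(f["size"] for f in row["repo_code"])
```

With the `zhile-io/pandora` row, `total_code_size` would add up the per-file `size` values (2389 for `setup.py`, 101 for `src/pandora/__main__.py`, and so on).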
repo_name: zhile-io/pandora
language: python
created_at: 2023-09-12T12:38:42
license: GNU General Public License v2.0
description: Pandora, a ChatGPT client that lets you breathe freely.
stars: 3,316
forks: 554
url: https://github.com/zhile-io/pandora
[ { "code": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\nfrom src.pandora import __version__\n\nwith open('README.md', 'r', encoding='utf-8') as f:\n long_description = f.read()\n\nwith open('requirements.txt', 'r', encoding='utf-8') as f:\n requirements = f.read().split('\\n')\n\nwith open('requirements_api.txt', 'r', encoding='utf-8') as f:\n requirements_api = f.read().split('\\n')\n\nsetup(\n name='Pandora-ChatGPT',\n version=__version__,\n python_requires='>=3.7',\n author='Neo Peng',\n author_email='[email protected]',\n keywords='OpenAI ChatGPT ChatGPT-Plus gpt-3.5-turbo gpt-3.5-turbo-0301',\n description='A command-line interface to ChatGPT',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/zhile-io/pandora',\n packages=find_packages('src'),\n package_dir={'pandora': 'src/pandora'},\n include_package_data=True,\n install_requires=requirements,\n extras_require={\n 'api': requirements_api,\n 'cloud': ['pandora-cloud~=0.6.1'],\n },\n entry_points={\n 'console_scripts': [\n 'pandora = pandora.launcher:run',\n 'pandora-cloud = pandora.cloud_launcher:run',\n ]\n },\n project_urls={\n 'Source': 'https://github.com/zhile-io/pandora',\n 'Tracker': 'https://github.com/zhile-io/pandora/issues',\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'Environment :: Console',\n 'Environment :: Web Environment',\n\n 'Framework :: Flask',\n\n 'Intended Audience :: Developers',\n 'Intended Audience :: End Users/Desktop',\n\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n\n 'Natural Language :: English',\n 'Natural Language :: Chinese (Simplified)',\n\n 'Operating System :: MacOS',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: SQL',\n 'Programming Language :: JavaScript',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming 
Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: 3.11',\n\n 'Topic :: Communications :: Chat',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py", "repo_name": "zhile-io/pandora", "size": 2389 }, { "code": "# -*- coding: utf-8 -*-\n\nfrom pandora import launcher\n\nif __name__ == '__main__':\n launcher.run()\n", "path": "src/pandora/__main__.py", "repo_name": "zhile-io/pandora", "size": 101 }, { "code": "# -*- coding: utf-8 -*-\n\nimport re\nimport uuid\n\nimport pyperclip\nfrom rich.prompt import Prompt, Confirm\n\nfrom .. import __version__\nfrom ..openai.utils import Console\n\n\nclass ChatPrompt:\n def __init__(self, prompt: str = None, parent_id=None, message_id=None):\n self.prompt = prompt\n self.parent_id = parent_id or self.gen_message_id()\n self.message_id = message_id or self.gen_message_id()\n\n @staticmethod\n def gen_message_id():\n return str(uuid.uuid4())\n\n\nclass State:\n def __init__(self, title=None, conversation_id=None, model_slug=None, user_prompt=ChatPrompt(),\n chatgpt_prompt=ChatPrompt()):\n self.title = title\n self.conversation_id = conversation_id\n self.model_slug = model_slug\n self.user_prompt = user_prompt\n self.chatgpt_prompt = chatgpt_prompt\n self.user_prompts = []\n self.edit_index = None\n\n\nclass ChatBot:\n def __init__(self, chatgpt):\n self.chatgpt = chatgpt\n self.token_key = None\n self.state = None\n\n def run(self):\n self.token_key = self.__choice_token_key()\n\n conversation_base = self.__choice_conversation()\n if conversation_base:\n self.__load_conversation(conversation_base['id'])\n else:\n self.__new_conversation()\n\n self.__talk_loop()\n\n def __talk_loop(self):\n while True:\n Console.info_b('You{}:'.format(' (edit)' if self.state and self.state.edit_index else ''))\n\n prompt = self.__get_input()\n if not prompt:\n continue\n\n if '/' == prompt[0]:\n self.__process_command(prompt)\n continue\n\n self.__talk(prompt)\n\n 
@staticmethod\n def __get_input():\n lines = []\n while True:\n line = input()\n\n if not line:\n break\n\n if '/' == line[0]:\n return line\n\n lines.append(line)\n\n return '\\n'.join(lines)\n\n def __process_command(self, command):\n command = command.strip().lower()\n\n if '/quit' == command or '/exit' == command or '/bye' == command:\n raise KeyboardInterrupt\n elif '/del' == command or '/delete' == command or '/remove' == command:\n self.__del_conversation(self.state)\n elif '/title' == command or '/set_title' == command or '/set-title' == command:\n self.__set_conversation_title(self.state)\n elif '/select' == command:\n self.run()\n elif '/refresh' == command or '/reload' == command:\n self.__load_conversation(self.state.conversation_id)\n elif '/new' == command:\n self.__new_conversation()\n self.__talk_loop()\n elif '/regen' == command or '/regenerate' == command:\n self.__regenerate_reply(self.state)\n elif '/goon' == command or '/continue' == command:\n self.__continue(self.state)\n elif '/edit' == command or '/modify' == command:\n self.__edit_choice()\n elif '/token' == command:\n self.__print_access_token()\n elif '/cls' == command or '/clear' == command:\n self.__clear_screen()\n elif '/copy' == command or '/cp' == command:\n self.__copy_text()\n elif '/copy_code' == command or \"/cp_code\" == command:\n self.__copy_code()\n elif '/ver' == command or '/version' == command:\n self.__print_version()\n else:\n self.__print_usage()\n\n @staticmethod\n def __print_usage():\n Console.info_b('\\n#### Command list:')\n print('/?\\t\\tShow this help message.')\n print('/title\\t\\tSet the current conversation\\'s title.')\n print('/select\\t\\tChoice a different conversation.')\n print('/reload\\t\\tReload the current conversation.')\n print('/regen\\t\\tRegenerate response.')\n print('/continue\\t\\tContinue generating.')\n print('/edit\\t\\tEdit one of your previous prompt.')\n print('/new\\t\\tStart a new conversation.')\n print('/del\\t\\tDelete the 
current conversation.')\n print('/token\\t\\tPrint your access token.')\n print('/copy\\t\\tCopy the last response to clipboard.')\n print('/copy_code\\t\\tCopy code from last response.')\n print('/clear\\t\\tClear your screen.')\n print('/version\\tPrint the version of Pandora.')\n print('/exit\\t\\tExit Pandora.')\n print()\n\n def __edit_choice(self):\n if not self.state.user_prompts:\n return\n\n choices = []\n pattern = re.compile(r'\\s+')\n Console.info_b('Choice your prompt to edit:')\n for idx, item in enumerate(self.state.user_prompts):\n number = str(idx + 1)\n choices.append(number)\n\n preview_prompt = re.sub(pattern, ' ', item.prompt)\n if len(preview_prompt) > 40:\n preview_prompt = '{}...'.format(preview_prompt[0:40])\n\n Console.info(' {}.\\t{}'.format(number, preview_prompt))\n\n choices.append('c')\n Console.warn(' c.\\t** Cancel')\n\n default_choice = None if len(choices) > 2 else '1'\n while True:\n choice = Prompt.ask('Your choice', choices=choices, show_choices=False, default=default_choice)\n if 'c' == choice:\n return\n\n self.state.edit_index = int(choice)\n return\n\n def __print_access_token(self):\n Console.warn_b('\\n#### Your access token (keep it private)')\n Console.warn(self.chatgpt.get_access_token(token_key=self.token_key))\n print()\n\n def __clear_screen(self):\n Console.clear()\n\n if self.state:\n self.__print_conversation_title(self.state.title)\n\n @staticmethod\n def __print_version():\n Console.debug_bh('#### Version: {}'.format(__version__))\n print()\n\n def __new_conversation(self):\n self.state = State(model_slug=self.__choice_model()['slug'])\n\n self.state.title = 'New Chat'\n self.__print_conversation_title(self.state.title)\n\n @staticmethod\n def __print_conversation_title(title: str):\n Console.info_bh('==================== {} ===================='.format(title))\n Console.debug_h('Double enter to send. Type /? 
for help.')\n\n def __set_conversation_title(self, state: State):\n if not state.conversation_id:\n Console.error('#### Conversation has not been created.')\n return\n\n new_title = Prompt.ask('New title')\n if len(new_title) > 64:\n Console.error('#### Title too long.')\n return\n\n if self.chatgpt.set_conversation_title(state.conversation_id, new_title, token=self.token_key):\n state.title = new_title\n Console.debug('#### Set title success.')\n else:\n Console.error('#### Set title failed.')\n\n def __clear_conversations(self):\n if not Confirm.ask('Are you sure?', default=False):\n return\n\n if self.chatgpt.clear_conversations(token=self.token_key):\n self.run()\n else:\n Console.error('#### Clear conversations failed.')\n\n def __del_conversation(self, state: State):\n if not state.conversation_id:\n Console.error('#### Conversation has not been created.')\n return\n\n if not Confirm.ask('Are you sure?', default=False):\n return\n\n if self.chatgpt.del_conversation(state.conversation_id, token=self.token_key):\n self.run()\n else:\n Console.error('#### Delete conversation failed.')\n\n def __load_conversation(self, conversation_id):\n if not conversation_id:\n return\n\n self.state = State(conversation_id=conversation_id)\n\n nodes = []\n result = self.chatgpt.get_conversation(conversation_id, token=self.token_key)\n current_node_id = result['current_node']\n\n while True:\n node = result['mapping'][current_node_id]\n if not node.get('parent'):\n break\n\n nodes.insert(0, node)\n current_node_id = node['parent']\n\n self.state.title = result['title']\n self.__print_conversation_title(self.state.title)\n\n merge = False\n for node in nodes:\n message = node['message']\n if 'model_slug' in message['metadata']:\n self.state.model_slug = message['metadata']['model_slug']\n\n role = message['author']['role'] if 'author' in message else message['role']\n\n if 'user' == role:\n prompt = self.state.user_prompt\n 
self.state.user_prompts.append(ChatPrompt(message['content']['parts'][0], parent_id=node['parent']))\n\n Console.info_b('You:')\n Console.info(message['content']['parts'][0])\n elif 'assistant' == role:\n prompt = self.state.chatgpt_prompt\n\n if not merge:\n Console.success_b('ChatGPT:')\n Console.success(message['content']['parts'][0])\n\n merge = 'end_turn' in message and message['end_turn'] is None\n else:\n continue\n\n prompt.prompt = message['content']['parts'][0]\n prompt.parent_id = node['parent']\n prompt.message_id = node['id']\n\n if not merge:\n print()\n\n def __talk(self, prompt):\n Console.success_b('ChatGPT:')\n\n first_prompt = not self.state.conversation_id\n\n if self.state.edit_index:\n idx = self.state.edit_index - 1\n user_prompt = self.state.user_prompts[idx]\n self.state.user_prompt = ChatPrompt(prompt, parent_id=user_prompt.parent_id)\n self.state.user_prompts = self.state.user_prompts[0:idx]\n\n self.state.edit_index = None\n else:\n self.state.user_prompt = ChatPrompt(prompt, parent_id=self.state.chatgpt_prompt.message_id)\n\n status, _, generator = self.chatgpt.talk(prompt, self.state.model_slug, self.state.user_prompt.message_id,\n self.state.user_prompt.parent_id, self.state.conversation_id,\n token=self.token_key)\n self.__print_reply(status, generator)\n\n self.state.user_prompts.append(self.state.user_prompt)\n\n if first_prompt:\n new_title = self.chatgpt.gen_conversation_title(self.state.conversation_id, self.state.model_slug,\n self.state.chatgpt_prompt.message_id, token=self.token_key)\n self.state.title = new_title\n Console.debug_bh('#### Title generated: ' + new_title)\n\n def __regenerate_reply(self, state):\n if not state.conversation_id:\n Console.error('#### Conversation has not been created.')\n return\n\n status, _, generator = self.chatgpt.regenerate_reply(state.user_prompt.prompt, state.model_slug,\n state.conversation_id, state.user_prompt.message_id,\n state.user_prompt.parent_id, token=self.token_key)\n print()\n 
Console.success_b('ChatGPT:')\n self.__print_reply(status, generator)\n\n def __continue(self, state):\n if not state.conversation_id:\n Console.error('#### Conversation has not been created.')\n return\n\n status, _, generator = self.chatgpt.goon(state.model_slug, state.chatgpt_prompt.message_id,\n state.conversation_id, token=self.token_key)\n print()\n Console.success_b('ChatGPT:')\n self.__print_reply(status, generator)\n\n def __print_reply(self, status, generator):\n if 200 != status:\n raise Exception(status, next(generator))\n\n p = 0\n for result in generator:\n if result['error']:\n raise Exception(result['error'])\n\n if not result['message']:\n raise Exception('miss message property.')\n\n text = None\n message = result['message']\n if 'assistant' == message['author']['role']:\n text = message['content']['parts'][0][p:]\n p += len(text)\n\n self.state.conversation_id = result['conversation_id']\n self.state.chatgpt_prompt.prompt = message['content']['parts'][0]\n self.state.chatgpt_prompt.parent_id = self.state.user_prompt.message_id\n self.state.chatgpt_prompt.message_id = message['id']\n\n if 'system' == message['author']['role']:\n self.state.user_prompt.parent_id = message['id']\n\n if text:\n Console.success(text, end='')\n\n print('\\n')\n\n def __choice_conversation(self, page=1, page_size=20):\n conversations = self.chatgpt.list_conversations((page - 1) * page_size, page_size, token=self.token_key)\n if not conversations['total']:\n return None\n\n choices = ['c', 'r', 'dd']\n items = conversations['items']\n first_page = 0 == conversations['offset']\n last_page = (conversations['offset'] + conversations['limit']) >= conversations['total']\n\n Console.info_b('Choice conversation (Page {}):'.format(page))\n for idx, item in enumerate(items):\n number = str(idx + 1)\n choices.append(number)\n choices.append('t' + number)\n choices.append('d' + number)\n Console.info(' {}.\\t{}'.format(number, item['title'].replace('\\n', ' ')))\n\n if not 
last_page:\n choices.append('n')\n Console.warn(' n.\\t>> Next page')\n\n if not first_page:\n choices.append('p')\n Console.warn(' p.\\t<< Previous page')\n\n Console.warn(' t?.\\tSet title for the conversation, eg: t1')\n Console.warn(' d?.\\tDelete the conversation, eg: d1')\n Console.warn(' dd.\\t!! Clear all conversations')\n Console.warn(' r.\\tRefresh conversation list')\n\n if len(self.chatgpt.list_token_keys()) > 1:\n choices.append('k')\n Console.warn(' k.\\tChoice access token')\n\n Console.warn(' c.\\t** Start new chat')\n\n while True:\n choice = Prompt.ask('Your choice', choices=choices, show_choices=False)\n if 'c' == choice:\n return None\n\n if 'k' == choice:\n self.run()\n return\n\n if 'r' == choice:\n return self.__choice_conversation(page, page_size)\n\n if 'n' == choice:\n return self.__choice_conversation(page + 1, page_size)\n\n if 'p' == choice:\n return self.__choice_conversation(page - 1, page_size)\n\n if 'dd' == choice:\n self.__clear_conversations()\n continue\n\n if 't' == choice[0]:\n self.__set_conversation_title(State(conversation_id=items[int(choice[1:]) - 1]['id']))\n return self.__choice_conversation(page, page_size)\n\n if 'd' == choice[0]:\n self.__del_conversation(State(conversation_id=items[int(choice[1:]) - 1]['id']))\n continue\n\n return items[int(choice) - 1]\n\n def __choice_token_key(self):\n tokens = self.chatgpt.list_token_keys()\n\n size = len(tokens)\n if 1 == size:\n return None\n\n choices = ['r']\n Console.info_b('Choice access token:')\n for idx, item in enumerate(tokens):\n number = str(idx + 1)\n choices.append(number)\n Console.info(' {}.\\t{}'.format(number, item))\n\n while True:\n choice = Prompt.ask('Your choice', choices=choices, show_choices=False)\n\n return tokens[int(choice) - 1]\n\n def __choice_model(self):\n models = self.chatgpt.list_models(token=self.token_key)\n\n size = len(models)\n if 1 == size:\n return models[0]\n\n choices = ['r']\n Console.info_b('Choice model:')\n for idx, item in 
enumerate(models):\n number = str(idx + 1)\n choices.append(number)\n Console.info(' {}.\\t{} - {} - {}'.format(number, item['title'], item['description'],\n '|'.join(item['tags'])))\n\n Console.warn(' r.\\tRefresh model list')\n\n while True:\n choice = Prompt.ask('Your choice', choices=choices, show_choices=False)\n if 'r' == choice:\n return self.__choice_model()\n\n return models[int(choice) - 1]\n\n def __copy_text(self):\n pyperclip.copy(self.state.chatgpt_prompt.prompt)\n Console.info(\"已将上一次返回结果复制到剪切板。\")\n pass\n\n def __copy_code(self):\n text = self.state.chatgpt_prompt.prompt\n pattern = re.compile(r'```.*\\s([\\s\\S]*?)\\s```')\n result = re.findall(pattern, text)\n if len(result) == 0:\n Console.info(\"未找到代码。\")\n return\n else:\n code = '\\n=======================================================\\n'.join(result)\n pyperclip.copy(code)\n Console.info(\"已将上一次生成的代码复制到剪切板。\")\n pass\n", "path": "src/pandora/bots/legacy.py", "repo_name": "zhile-io/pandora", "size": 17441 }, { "code": "# -*- coding: utf-8 -*-\n\nimport logging\nfrom datetime import timedelta\nfrom os.path import join, abspath, dirname\n\nfrom flask import Flask, jsonify, make_response, request, Response, render_template\nfrom flask_cors import CORS\nfrom waitress import serve\nfrom werkzeug.exceptions import default_exceptions\nfrom werkzeug.middleware.proxy_fix import ProxyFix\nfrom werkzeug.serving import WSGIRequestHandler\n\nfrom .. 
import __version__\nfrom ..exts.hooks import hook_logging\nfrom ..openai.api import API\n\n\nclass ChatBot:\n __default_ip = '127.0.0.1'\n __default_port = 8008\n\n def __init__(self, chatgpt, debug=False, sentry=False):\n self.chatgpt = chatgpt\n self.debug = debug\n self.sentry = sentry\n self.log_level = logging.DEBUG if debug else logging.WARN\n\n hook_logging(level=self.log_level, format='[%(asctime)s] %(levelname)s in %(module)s: %(message)s')\n self.logger = logging.getLogger('waitress')\n\n def run(self, bind_str, threads=8):\n host, port = self.__parse_bind(bind_str)\n\n resource_path = abspath(join(dirname(__file__), '..', 'flask'))\n app = Flask(__name__, static_url_path='',\n static_folder=join(resource_path, 'static'),\n template_folder=join(resource_path, 'templates'))\n app.wsgi_app = ProxyFix(app.wsgi_app, x_port=1)\n app.after_request(self.__after_request)\n\n CORS(app, resources={r'/api/*': {'supports_credentials': True, 'expose_headers': [\n 'Content-Type',\n 'Authorization',\n 'X-Requested-With',\n 'Accept',\n 'Origin',\n 'Access-Control-Request-Method',\n 'Access-Control-Request-Headers',\n 'Content-Disposition',\n ], 'max_age': 600}})\n\n for ex in default_exceptions:\n app.register_error_handler(ex, self.__handle_error)\n\n app.route('/api/models')(self.list_models)\n app.route('/api/conversations')(self.list_conversations)\n app.route('/api/conversations', methods=['DELETE'])(self.clear_conversations)\n app.route('/api/conversation/<conversation_id>')(self.get_conversation)\n app.route('/api/conversation/<conversation_id>', methods=['DELETE'])(self.del_conversation)\n app.route('/api/conversation/<conversation_id>', methods=['PATCH'])(self.set_conversation_title)\n app.route('/api/conversation/gen_title/<conversation_id>', methods=['POST'])(self.gen_conversation_title)\n app.route('/api/conversation/talk', methods=['POST'])(self.talk)\n app.route('/api/conversation/regenerate', methods=['POST'])(self.regenerate)\n 
app.route('/api/conversation/goon', methods=['POST'])(self.goon)\n\n app.route('/api/auth/session')(self.session)\n app.route('/api/accounts/check')(self.check)\n app.route('/_next/data/olf4sv64FWIcQ_zCGl90t/chat.json')(self.chat_info)\n\n app.route('/')(self.chat)\n app.route('/chat')(self.chat)\n app.route('/chat/<conversation_id>')(self.chat)\n\n if not self.debug:\n self.logger.warning('Serving on http://{}:{}'.format(host, port))\n\n WSGIRequestHandler.protocol_version = 'HTTP/1.1'\n serve(app, host=host, port=port, ident=None, threads=threads)\n\n @staticmethod\n def __after_request(resp):\n resp.headers['X-Server'] = 'pandora/{}'.format(__version__)\n\n return resp\n\n def __parse_bind(self, bind_str):\n sections = bind_str.split(':', 2)\n if len(sections) < 2:\n try:\n port = int(sections[0])\n return self.__default_ip, port\n except ValueError:\n return sections[0], self.__default_port\n\n return sections[0], int(sections[1])\n\n def __handle_error(self, e):\n self.logger.error(e)\n\n return make_response(jsonify({\n 'code': e.code,\n 'message': str(e.original_exception if self.debug and hasattr(e, 'original_exception') else e.name)\n }), 500)\n\n @staticmethod\n def __set_cookie(resp, token_key, max_age):\n resp.set_cookie('token-key', token_key, max_age=max_age, path='/', domain=None, httponly=True, samesite='Lax')\n\n @staticmethod\n def __get_token_key():\n return request.headers.get('X-Use-Token', request.cookies.get('token-key'))\n\n def chat(self, conversation_id=None):\n query = {'chatId': [conversation_id]} if conversation_id else {}\n\n token_key = request.args.get('token')\n rendered = render_template('chat.html', pandora_base=request.url_root.strip('/'), query=query)\n resp = make_response(rendered)\n\n if token_key:\n self.__set_cookie(resp, token_key, timedelta(days=30))\n\n return resp\n\n @staticmethod\n def session():\n ret = {\n 'user': {\n 'id': 'user-000000000000000000000000',\n 'name': '[email protected]',\n 'email': '[email 
protected]',\n 'image': None,\n 'picture': None,\n 'groups': []\n },\n 'expires': '2089-08-08T23:59:59.999Z',\n 'accessToken': 'secret',\n }\n\n return jsonify(ret)\n\n @staticmethod\n def chat_info():\n ret = {\n 'pageProps': {\n 'user': {\n 'id': 'user-000000000000000000000000',\n 'name': '[email protected]',\n 'email': '[email protected]',\n 'image': None,\n 'picture': None,\n 'groups': []\n },\n 'serviceStatus': {},\n 'userCountry': 'US',\n 'geoOk': True,\n 'serviceAnnouncement': {\n 'paid': {},\n 'public': {}\n },\n 'isUserInCanPayGroup': True\n },\n '__N_SSP': True\n }\n\n return jsonify(ret)\n\n @staticmethod\n def check():\n ret = {\n 'account_plan': {\n 'is_paid_subscription_active': True,\n 'subscription_plan': 'chatgptplusplan',\n 'account_user_role': 'account-owner',\n 'was_paid_customer': True,\n 'has_customer_object': True,\n 'subscription_expires_at_timestamp': 3774355199\n },\n 'user_country': 'US',\n 'features': [\n 'model_switcher',\n 'dfw_message_feedback',\n 'dfw_inline_message_regen_comparison',\n 'model_preview',\n 'system_message',\n 'can_continue',\n ],\n }\n\n return jsonify(ret)\n\n def list_models(self):\n return self.__proxy_result(self.chatgpt.list_models(True, self.__get_token_key()))\n\n def list_conversations(self):\n offset = request.args.get('offset', '0')\n limit = request.args.get('limit', '20')\n\n return self.__proxy_result(self.chatgpt.list_conversations(offset, limit, True, self.__get_token_key()))\n\n def get_conversation(self, conversation_id):\n return self.__proxy_result(self.chatgpt.get_conversation(conversation_id, True, self.__get_token_key()))\n\n def del_conversation(self, conversation_id):\n return self.__proxy_result(self.chatgpt.del_conversation(conversation_id, True, self.__get_token_key()))\n\n def clear_conversations(self):\n return self.__proxy_result(self.chatgpt.clear_conversations(True, self.__get_token_key()))\n\n def set_conversation_title(self, conversation_id):\n title = request.json['title']\n\n return 
self.__proxy_result(\n self.chatgpt.set_conversation_title(conversation_id, title, True, self.__get_token_key()))\n\n def gen_conversation_title(self, conversation_id):\n payload = request.json\n model = payload['model']\n message_id = payload['message_id']\n\n return self.__proxy_result(\n self.chatgpt.gen_conversation_title(conversation_id, model, message_id, True, self.__get_token_key()))\n\n def talk(self):\n payload = request.json\n prompt = payload['prompt']\n model = payload['model']\n message_id = payload['message_id']\n parent_message_id = payload['parent_message_id']\n conversation_id = payload.get('conversation_id')\n stream = payload.get('stream', True)\n\n return self.__process_stream(\n *self.chatgpt.talk(prompt, model, message_id, parent_message_id, conversation_id, stream,\n self.__get_token_key()), stream)\n\n def goon(self):\n payload = request.json\n model = payload['model']\n parent_message_id = payload['parent_message_id']\n conversation_id = payload.get('conversation_id')\n stream = payload.get('stream', True)\n\n return self.__process_stream(\n *self.chatgpt.goon(model, parent_message_id, conversation_id, stream, self.__get_token_key()), stream)\n\n def regenerate(self):\n payload = request.json\n\n conversation_id = payload.get('conversation_id')\n if not conversation_id:\n return self.talk()\n\n prompt = payload['prompt']\n model = payload['model']\n message_id = payload['message_id']\n parent_message_id = payload['parent_message_id']\n stream = payload.get('stream', True)\n\n return self.__process_stream(\n *self.chatgpt.regenerate_reply(prompt, model, conversation_id, message_id, parent_message_id, stream,\n self.__get_token_key()), stream)\n\n @staticmethod\n def __process_stream(status, headers, generator, stream):\n if stream:\n return Response(API.wrap_stream_out(generator, status), mimetype=headers['Content-Type'], status=status)\n\n last_json = None\n for json in generator:\n last_json = json\n\n return make_response(last_json, 
status)\n\n @staticmethod\n def __proxy_result(remote_resp):\n resp = make_response(remote_resp.text)\n resp.content_type = remote_resp.headers['Content-Type']\n resp.status_code = remote_resp.status_code\n\n return resp\n", "path": "src/pandora/bots/server.py", "repo_name": "zhile-io/pandora", "size": 10213 }, { "code": "# -*- coding: utf-8 -*-\n\nimport argparse\n\nfrom loguru import logger\n\nfrom . import __version__\nfrom .exts.hooks import hook_except_handle\nfrom .openai.utils import Console\n\n__show_verbose = False\n\n\ndef main():\n global __show_verbose\n\n Console.debug_b(\n '''\n Pandora-Cloud - A web interface to ChatGPT\n Github: https://github.com/zhile-io/pandora\n Version: {}, Mode: cloud, Engine: free\n '''.format(__version__)\n )\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\n '-p',\n '--proxy',\n help='Use a proxy. Format: protocol://user:pass@ip:port',\n required=False,\n type=str,\n default=None,\n )\n parser.add_argument(\n '-s',\n '--server',\n help='Specific server bind. Format: ip:port, default: 127.0.0.1:8018',\n required=False,\n type=str,\n default='127.0.0.1:8018',\n )\n parser.add_argument(\n '--threads',\n help='Define the number of server workers, default: 4',\n required=False,\n type=int,\n default=4,\n )\n parser.add_argument(\n '-l',\n '--local',\n help='Login locally. 
Pay attention to the risk control of the login ip!',\n action='store_true',\n )\n parser.add_argument(\n '-v',\n '--verbose',\n help='Show exception traceback.',\n action='store_true',\n )\n args, _ = parser.parse_known_args()\n __show_verbose = args.verbose\n\n try:\n from pandora_cloud.server import ChatBot as CloudServer\n\n return CloudServer(args.proxy, args.verbose, login_local=args.local).run(args.server, args.threads)\n except (ImportError, ModuleNotFoundError):\n Console.error_bh('### You need `pip install Pandora-ChatGPT[cloud]` to support cloud mode.')\n\n\ndef run():\n hook_except_handle()\n\n try:\n main()\n except Exception as e:\n Console.error_bh('### Error occurred: ' + str(e))\n\n if __show_verbose:\n logger.exception('Exception occurred.')\n", "path": "src/pandora/cloud_launcher.py", "repo_name": "zhile-io/pandora", "size": 2048 }, { "code": "# -*- coding: utf-8 -*-\n\nfrom datetime import datetime, timedelta\nfrom os import getenv\nfrom os.path import join\n\nfrom appdirs import user_config_dir\n\nUSER_CONFIG_DIR = getenv('USER_CONFIG_DIR', user_config_dir('Pandora-ChatGPT'))\nDATABASE_URI = getenv('DATABASE_URI',\n 'sqlite:///{}?check_same_thread=False'.format(join(USER_CONFIG_DIR, 'pandora-chatgpt.db')))\n\n\ndef default_api_prefix():\n return 'https://ai-{}.fakeopen.com'.format((datetime.now() - timedelta(days=1)).strftime('%Y%m%d'))\n", "path": "src/pandora/exts/config.py", "repo_name": "zhile-io/pandora", "size": 515 }, { "code": "# -*- coding: utf-8 -*-\n\nimport logging\nimport sys\n\nfrom loguru import logger\n\n\ndef __exception_handle(e_type, e_value, e_traceback):\n if issubclass(e_type, KeyboardInterrupt):\n print('\\nBye...')\n sys.exit(0)\n\n sys.__excepthook__(e_type, e_value, e_traceback)\n\n\nclass __InterceptHandler(logging.Handler):\n def emit(self, record):\n try:\n level = logger.level(record.levelname).name\n except ValueError:\n level = record.levelno\n\n frame, depth = logging.currentframe(), 2\n while 
frame.f_code.co_filename == logging.__file__:\n frame = frame.f_back\n depth += 1\n\n logger.opt(depth=depth, exception=record.exc_info).log(\n level, record.getMessage()\n )\n\n\ndef hook_except_handle():\n sys.excepthook = __exception_handle\n\n\ndef hook_logging(**kwargs):\n logging.basicConfig(handlers=[__InterceptHandler()], **kwargs)\n", "path": "src/pandora/exts/hooks.py", "repo_name": "zhile-io/pandora", "size": 929 }, { "code": "# -*- coding: utf-8 -*-\n\nfrom jwt import decode\n\nfrom ..openai.utils import Console\n\n__public_key = b'-----BEGIN PUBLIC KEY-----\\n' \\\n b'MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27rOErDOPvPc3mOADYtQ\\n' \\\n b'BeenQm5NS5VHVaoO/Zmgsf1M0Wa/2WgLm9jX65Ru/K8Az2f4MOdpBxxLL686ZS+K\\n' \\\n b'7eJC/oOnrxCRzFYBqQbYo+JMeqNkrCn34yed4XkX4ttoHi7MwCEpVfb05Qf/ZAmN\\n' \\\n b'I1XjecFYTyZQFrd9LjkX6lr05zY6aM/+MCBNeBWp35pLLKhiq9AieB1wbDPcGnqx\\n' \\\n b'lXuU/bLgIyqUltqLkr9JHsf/2T4VrXXNyNeQyBq5wjYlRkpBQDDDNOcdGpx1buRr\\n' \\\n b'Z2hFyYuXDRrMcR6BQGC0ur9hI5obRYlchDFhlb0ElsJ2bshDDGRk5k3doHqbhj2I\\n' \\\n b'gQIDAQAB\\n' \\\n b'-----END PUBLIC KEY-----'\n\n\ndef check_access_token(access_token, api=False):\n if access_token.startswith('fk-'):\n return True\n\n if api and (access_token.startswith('sk-') or access_token.startswith('pk-')):\n return True\n\n payload = (decode(access_token, key=__public_key, algorithms='RS256', audience=[\n \"https://api.openai.com/v1\",\n \"https://openai.openai.auth0app.com/userinfo\"\n ], issuer='https://auth0.openai.com/'))\n\n if 'scope' not in payload:\n raise Exception('miss scope')\n\n scope = payload['scope']\n if 'model.read' not in scope or 'model.request' not in scope:\n raise Exception('invalid scope')\n\n if 'https://api.openai.com/auth' not in payload or 'https://api.openai.com/profile' not in payload:\n raise Exception('belonging to an unregistered user.')\n\n return payload\n\n\ndef check_access_token_out(access_token, api=False):\n try:\n return check_access_token(access_token, api)\n except 
Exception as e:\n Console.error('### Invalid access token: {}'.format(str(e)))\n return False\n", "path": "src/pandora/exts/token.py", "repo_name": "zhile-io/pandora", "size": 1793 }, { "code": "# -*- coding: utf-8 -*-\n\nimport argparse\nimport os\nfrom os import getenv\n\nfrom loguru import logger\nfrom rich.prompt import Prompt, Confirm\n\nfrom . import __version__\nfrom .bots.legacy import ChatBot as ChatBotLegacy\nfrom .bots.server import ChatBot as ChatBotServer\nfrom .exts.config import USER_CONFIG_DIR, default_api_prefix\nfrom .exts.hooks import hook_except_handle\nfrom .exts.token import check_access_token_out\nfrom .openai.api import ChatGPT\nfrom .openai.auth import Auth0\nfrom .openai.utils import Console\n\nif 'nt' == os.name:\n import pyreadline3 as readline\nelse:\n import readline\n\n readline.set_completer_delims('')\n readline.set_auto_history(False)\n\n__show_verbose = False\n\n\ndef read_access_token(token_file):\n with open(token_file, 'r') as f:\n return f.read().strip()\n\n\ndef save_access_token(access_token):\n token_file = os.path.join(USER_CONFIG_DIR, 'access_token.dat')\n\n if not os.path.exists(USER_CONFIG_DIR):\n os.makedirs(USER_CONFIG_DIR)\n\n with open(token_file, 'w') as f:\n f.write(access_token)\n\n if __show_verbose:\n Console.debug_b('\\nThe access token has been saved to the file:')\n Console.debug(token_file)\n print()\n\n\ndef confirm_access_token(token_file=None, silence=False, api=False):\n app_token_file = os.path.join(USER_CONFIG_DIR, 'access_token.dat')\n\n app_token_file_exists = os.path.isfile(app_token_file)\n if app_token_file_exists and __show_verbose:\n Console.debug_b('Found access token file: ', end='')\n Console.debug(app_token_file)\n\n if token_file:\n if not os.path.isfile(token_file):\n raise Exception('Error: {} is not a file.'.format(token_file))\n\n access_token = read_access_token(token_file)\n if os.path.isfile(app_token_file) and access_token == read_access_token(app_token_file):\n return 
access_token, False\n\n return access_token, True\n\n if app_token_file_exists:\n confirm = 'y' if silence else Prompt.ask('A saved access token has been detected. Do you want to use it?',\n choices=['y', 'n', 'del'], default='y')\n if 'y' == confirm:\n access_token = read_access_token(app_token_file)\n if not check_access_token_out(access_token, api):\n os.remove(app_token_file)\n return None, True\n\n return access_token, False\n elif 'del' == confirm:\n os.remove(app_token_file)\n\n return None, True\n\n\ndef parse_access_tokens(tokens_file, api=False):\n if not os.path.isfile(tokens_file):\n raise Exception('Error: {} is not a file.'.format(tokens_file))\n\n import json\n with open(tokens_file, 'r') as f:\n tokens = json.load(f)\n\n valid_tokens = {}\n for key, value in tokens.items():\n if not check_access_token_out(value, api=api):\n Console.error('### Access token id: {}'.format(key))\n continue\n valid_tokens[key] = value\n\n if not valid_tokens:\n Console.error('### No valid access tokens.')\n return None\n\n return valid_tokens\n\n\ndef main():\n global __show_verbose\n\n api_prefix = getenv('CHATGPT_API_PREFIX', default_api_prefix())\n\n Console.debug_b(\n '''\n Pandora - A command-line interface to ChatGPT\n Github: https://github.com/zhile-io/pandora\n Get access token: {}/auth\n Version: {}'''.format(api_prefix, __version__), end=''\n )\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\n '-p',\n '--proxy',\n help='Use a proxy. Format: protocol://user:pass@ip:port',\n required=False,\n type=str,\n default=None,\n )\n parser.add_argument(\n '-t',\n '--token_file',\n help='Specify an access token file and login with your access token.',\n required=False,\n type=str,\n default=None,\n )\n parser.add_argument(\n '--tokens_file',\n help='Specify an access tokens json file.',\n required=False,\n type=str,\n default=None,\n )\n parser.add_argument(\n '-s',\n '--server',\n help='Start as a proxy server. 
Format: ip:port, default: 127.0.0.1:8008',\n required=False,\n type=str,\n default=None,\n action='store',\n nargs='?',\n const='127.0.0.1:8008',\n )\n parser.add_argument(\n '--threads',\n help='Define the number of server workers, default: 8',\n required=False,\n type=int,\n default=8,\n )\n parser.add_argument(\n '-a',\n '--api',\n help='Use gpt-3.5-turbo chat api. Note: OpenAI will bill you.',\n action='store_true',\n )\n parser.add_argument(\n '-l',\n '--local',\n help='Login locally. Pay attention to the risk control of the login ip!',\n action='store_true',\n )\n parser.add_argument(\n '-v',\n '--verbose',\n help='Show exception traceback.',\n action='store_true',\n )\n args, _ = parser.parse_known_args()\n __show_verbose = args.verbose\n\n Console.debug_b(''', Mode: {}, Engine: {}\n '''.format('server' if args.server else 'cli', 'turbo' if args.api else 'free'))\n\n if args.api:\n try:\n from .openai.token import gpt_num_tokens\n from .migrations.migrate import do_migrate\n\n do_migrate()\n except (ImportError, ModuleNotFoundError):\n Console.error_bh('### You need `pip install Pandora-ChatGPT[api]` to support API mode.')\n return\n\n access_tokens = parse_access_tokens(args.tokens_file, args.api) if args.tokens_file else None\n\n if not access_tokens:\n access_token, need_save = confirm_access_token(args.token_file, args.server, args.api)\n if not access_token:\n Console.info_b('Please enter your email and password to log in ChatGPT!')\n if not args.local:\n Console.warn('We login via {}'.format(api_prefix))\n\n email = getenv('OPENAI_EMAIL') or Prompt.ask(' Email')\n password = getenv('OPENAI_PASSWORD') or Prompt.ask(' Password', password=True)\n mfa = getenv('OPENAI_MFA_CODE') or Prompt.ask(' MFA Code(Optional if not set)')\n Console.warn('### Do login, please wait...')\n access_token = Auth0(email, password, args.proxy, mfa=mfa).auth(args.local)\n\n if not check_access_token_out(access_token, args.api):\n return\n\n if need_save:\n if args.server or 
Confirm.ask('Do you want to save your access token for the next login?', default=True):\n save_access_token(access_token)\n\n access_tokens = {'default': access_token}\n\n if args.api:\n from .turbo.chat import TurboGPT\n\n chatgpt = TurboGPT(access_tokens, args.proxy)\n else:\n chatgpt = ChatGPT(access_tokens, args.proxy)\n\n if args.server:\n return ChatBotServer(chatgpt, args.verbose).run(args.server, args.threads)\n\n ChatBotLegacy(chatgpt).run()\n\n\ndef run():\n hook_except_handle()\n\n try:\n main()\n except Exception as e:\n Console.error_bh('### Error occurred: ' + str(e))\n\n if __show_verbose:\n logger.exception('Exception occurred.')\n", "path": "src/pandora/launcher.py", "repo_name": "zhile-io/pandora", "size": 7290 }, { "code": "# -*- coding: utf-8 -*-\n\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.orm import sessionmaker\n\nfrom ..exts.config import DATABASE_URI\n\nengine = create_engine(DATABASE_URI, echo=False)\n\nSession = sessionmaker(bind=engine)\n\nsession = Session()\n", "path": "src/pandora/migrations/database.py", "repo_name": "zhile-io/pandora", "size": 250 }, { "code": "# -*- coding: utf-8 -*-\n\nfrom os import makedirs, path\nfrom os.path import abspath, join, dirname\n\nfrom yoyo import get_backend\nfrom yoyo import read_migrations\n\nfrom ..exts.config import DATABASE_URI, USER_CONFIG_DIR\n\n\ndef do_migrate():\n if not path.exists(USER_CONFIG_DIR):\n makedirs(USER_CONFIG_DIR)\n\n url = 'mysql:{}'.format(DATABASE_URI[14:]) if 'mysql+pymysql:' == DATABASE_URI[0:14] else DATABASE_URI\n backend = get_backend(url)\n migrations = read_migrations(abspath(join(dirname(__file__), 'scripts')))\n\n with backend.lock():\n backend.apply_migrations(backend.to_apply(migrations))\n", "path": "src/pandora/migrations/migrate.py", "repo_name": "zhile-io/pandora", "size": 619 }, { "code": "# -*- coding: utf-8 -*-\n\nfrom datetime import datetime as dt\n\nfrom sqlalchemy import func, Column, Text, Integer\nfrom sqlalchemy.orm import 
DeclarativeBase\n\nfrom ..migrations.database import session\n\n\nclass Base(DeclarativeBase):\n pass\n\n\nclass ConversationOfficial(Base):\n __tablename__ = 'conversation_official'\n\n conversation_id = Column(Text, primary_key=True, autoincrement=False)\n title = Column(Text, nullable=False)\n create_time = Column(Integer, nullable=False)\n\n @staticmethod\n def get_list(offset, limit):\n total = session.query(func.count(ConversationOfficial.conversation_id)).scalar()\n return total, session.query(ConversationOfficial).order_by(ConversationOfficial.create_time.desc()).limit(\n limit).offset(offset).all()\n\n @staticmethod\n def get(conversation_id):\n return session.query(ConversationOfficial).get(conversation_id)\n\n def save(self):\n session.commit()\n return self\n\n def new(self):\n session.add(self)\n session.commit()\n\n return self\n\n @staticmethod\n def delete(conversation_id):\n session.query(ConversationOfficial).filter(ConversationOfficial.conversation_id == conversation_id).delete()\n session.commit()\n\n @staticmethod\n def clear():\n session.query(ConversationOfficial).delete()\n session.commit()\n\n @staticmethod\n def new_conversation(conversation_id, title=None):\n conv = ConversationOfficial.get(conversation_id)\n\n if not conv:\n conv = ConversationOfficial()\n conv.conversation_id = conversation_id\n conv.title = title or 'New chat'\n conv.create_time = dt.now().timestamp()\n conv.new()\n else:\n conv.title = title or 'New chat'\n conv.save()\n\n @staticmethod\n def wrap_conversation_list(offset, limit):\n total, items = ConversationOfficial.get_list(offset, limit)\n\n stripped = []\n for item in items:\n stripped.append({\n 'id': item.conversation_id,\n 'title': item.title,\n 'create_time': dt.utcfromtimestamp(item.create_time).isoformat(),\n })\n\n return {'items': stripped, 'total': total, 'limit': limit, 'offset': offset}\n\n\nclass ConversationInfo(Base):\n __tablename__ = 'conversation_info'\n\n conversation_id = Column(Text, 
primary_key=True, autoincrement=False)\n title = Column(Text, nullable=False)\n create_time = Column(Integer, nullable=False)\n current_node = Column(Text, nullable=True)\n\n @staticmethod\n def get_list(offset, limit):\n total = session.query(func.count(ConversationInfo.conversation_id)).scalar()\n return total, session.query(ConversationInfo).order_by(ConversationInfo.create_time.desc()).limit(\n limit).offset(offset).all()\n\n @staticmethod\n def get(conversation_id):\n return session.query(ConversationInfo).get(conversation_id)\n\n def new(self):\n session.add(self)\n session.commit()\n\n return self\n\n @staticmethod\n def delete(conversation_id):\n session.query(ConversationInfo).filter(ConversationInfo.conversation_id == conversation_id).delete()\n session.commit()\n\n @staticmethod\n def clear():\n session.query(ConversationInfo).delete()\n session.commit()\n\n\nclass PromptInfo(Base):\n __tablename__ = 'prompt_info'\n\n prompt_id = Column(Text, primary_key=True, autoincrement=False)\n conversation_id = Column(Text, primary_key=True, autoincrement=False)\n model = Column(Text, nullable=True)\n parent_id = Column(Text, nullable=True)\n role = Column(Text, nullable=True)\n content = Column(Text, nullable=True)\n create_time = Column(Integer, nullable=False)\n\n @staticmethod\n def list_by_conversation_id(conversation_id):\n return session.query(PromptInfo).filter(PromptInfo.conversation_id == conversation_id).all()\n\n def new(self):\n session.add(self)\n session.commit()\n\n return self\n\n @staticmethod\n def clear():\n session.query(PromptInfo).delete()\n session.commit()\n", "path": "src/pandora/migrations/models.py", "repo_name": "zhile-io/pandora", "size": 4153 }, { "code": "# -*- coding: utf-8 -*-\n\nimport asyncio\nimport json\nimport queue as block_queue\nimport threading\nfrom os import getenv\n\nimport httpx\nimport requests\nfrom certifi import where\n\nfrom .. 
import __version__\nfrom ..exts.config import default_api_prefix\n\n\nclass API:\n def __init__(self, proxy, ca_bundle):\n self.proxy = proxy\n self.ca_bundle = ca_bundle\n\n @staticmethod\n def wrap_stream_out(generator, status):\n if status != 200:\n for line in generator:\n yield json.dumps(line)\n\n return\n\n for line in generator:\n yield b'data: ' + json.dumps(line).encode('utf-8') + b'\\n\\n'\n\n yield b'data: [DONE]\\n\\n'\n\n async def __process_sse(self, resp):\n yield resp.status_code\n yield resp.headers\n\n if resp.status_code != 200:\n yield await self.__process_sse_except(resp)\n return\n\n async for utf8_line in resp.aiter_lines():\n if 'data: [DONE]' == utf8_line[0:12]:\n break\n\n if 'data: {\"message\":' == utf8_line[0:17] or 'data: {\"id\":' == utf8_line[0:12]:\n yield json.loads(utf8_line[6:])\n\n @staticmethod\n async def __process_sse_except(resp):\n result = b''\n async for line in resp.aiter_bytes():\n result += line\n\n return json.loads(result.decode('utf-8'))\n\n @staticmethod\n def __generate_wrap(queue, thread, event):\n while True:\n try:\n item = queue.get()\n if item is None:\n break\n\n yield item\n except BaseException as e:\n event.set()\n thread.join()\n\n if isinstance(e, GeneratorExit):\n raise e\n\n async def _do_request_sse(self, url, headers, data, queue, event):\n async with httpx.AsyncClient(verify=self.ca_bundle, proxies=self.proxy) as client:\n async with client.stream('POST', url, json=data, headers=headers, timeout=600) as resp:\n async for line in self.__process_sse(resp):\n queue.put(line)\n\n if event.is_set():\n await client.aclose()\n break\n\n queue.put(None)\n\n def _request_sse(self, url, headers, data):\n queue, e = block_queue.Queue(), threading.Event()\n t = threading.Thread(target=asyncio.run, args=(self._do_request_sse(url, headers, data, queue, e),))\n t.start()\n\n return queue.get(), queue.get(), self.__generate_wrap(queue, t, e)\n\n\nclass ChatGPT(API):\n def __init__(self, access_tokens: dict, 
proxy=None):\n self.access_tokens = access_tokens\n self.access_token_key_list = list(access_tokens)\n self.default_token_key = self.access_token_key_list[0]\n self.session = requests.Session()\n self.req_kwargs = {\n 'proxies': {\n 'http': proxy,\n 'https': proxy,\n } if proxy else None,\n 'verify': where(),\n 'timeout': 100,\n 'allow_redirects': False,\n }\n\n self.user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) ' \\\n 'Pandora/{} Safari/537.36'.format(__version__)\n\n super().__init__(proxy, self.req_kwargs['verify'])\n\n def __get_headers(self, token_key=None):\n return {\n 'Authorization': 'Bearer ' + self.get_access_token(token_key),\n 'User-Agent': self.user_agent,\n 'Content-Type': 'application/json',\n }\n\n @staticmethod\n def __get_api_prefix():\n return getenv('CHATGPT_API_PREFIX', default_api_prefix())\n\n def get_access_token(self, token_key=None):\n return self.access_tokens[token_key or self.default_token_key]\n\n def list_token_keys(self):\n return self.access_token_key_list\n\n def list_models(self, raw=False, token=None):\n url = '{}/api/models'.format(self.__get_api_prefix())\n resp = self.session.get(url=url, headers=self.__get_headers(token), **self.req_kwargs)\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n raise Exception('list models failed: ' + self.__get_error(resp))\n\n result = resp.json()\n if 'models' not in result:\n raise Exception('list models failed: ' + resp.text)\n\n return result['models']\n\n def list_conversations(self, offset, limit, raw=False, token=None):\n url = '{}/api/conversations?offset={}&limit={}'.format(self.__get_api_prefix(), offset, limit)\n resp = self.session.get(url=url, headers=self.__get_headers(token), **self.req_kwargs)\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n raise Exception('list conversations failed: ' + self.__get_error(resp))\n\n return resp.json()\n\n def get_conversation(self, conversation_id, raw=False, 
token=None):\n url = '{}/api/conversation/{}'.format(self.__get_api_prefix(), conversation_id)\n resp = self.session.get(url=url, headers=self.__get_headers(token), **self.req_kwargs)\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n raise Exception('get conversation failed: ' + self.__get_error(resp))\n\n return resp.json()\n\n def clear_conversations(self, raw=False, token=None):\n data = {\n 'is_visible': False,\n }\n\n url = '{}/api/conversations'.format(self.__get_api_prefix())\n resp = self.session.patch(url=url, headers=self.__get_headers(token), json=data, **self.req_kwargs)\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n raise Exception('clear conversations failed: ' + self.__get_error(resp))\n\n result = resp.json()\n if 'success' not in result:\n raise Exception('clear conversations failed: ' + resp.text)\n\n return result['success']\n\n def del_conversation(self, conversation_id, raw=False, token=None):\n data = {\n 'is_visible': False,\n }\n\n return self.__update_conversation(conversation_id, data, raw, token)\n\n def gen_conversation_title(self, conversation_id, model, message_id, raw=False, token=None):\n url = '{}/api/conversation/gen_title/{}'.format(self.__get_api_prefix(), conversation_id)\n data = {\n 'model': model,\n 'message_id': message_id,\n }\n resp = self.session.post(url=url, headers=self.__get_headers(token), json=data, **self.req_kwargs)\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n raise Exception('gen title failed: ' + self.__get_error(resp))\n\n result = resp.json()\n if 'title' not in result:\n raise Exception('gen title failed: ' + resp.text)\n\n return result['title']\n\n def set_conversation_title(self, conversation_id, title, raw=False, token=None):\n data = {\n 'title': title,\n }\n\n return self.__update_conversation(conversation_id, data, raw, token)\n\n def talk(self, prompt, model, message_id, parent_message_id, conversation_id=None, stream=True, token=None):\n data = {\n 'action': 
'next',\n 'messages': [\n {\n 'id': message_id,\n 'role': 'user',\n 'author': {\n 'role': 'user',\n },\n 'content': {\n 'content_type': 'text',\n 'parts': [prompt],\n },\n }\n ],\n 'model': model,\n 'parent_message_id': parent_message_id,\n }\n\n if conversation_id:\n data['conversation_id'] = conversation_id\n\n return self.__request_conversation(data, token)\n\n def goon(self, model, parent_message_id, conversation_id, stream=True, token=None):\n data = {\n 'action': 'continue',\n 'conversation_id': conversation_id,\n 'model': model,\n 'parent_message_id': parent_message_id,\n }\n\n return self.__request_conversation(data, token)\n\n def regenerate_reply(self, prompt, model, conversation_id, message_id, parent_message_id, stream=True, token=None):\n data = {\n 'action': 'variant',\n 'messages': [\n {\n 'id': message_id,\n 'role': 'user',\n 'author': {\n 'role': 'user',\n },\n 'content': {\n 'content_type': 'text',\n 'parts': [prompt],\n },\n }\n ],\n 'model': model,\n 'conversation_id': conversation_id,\n 'parent_message_id': parent_message_id,\n }\n\n return self.__request_conversation(data, token)\n\n def __request_conversation(self, data, token=None):\n url = '{}/api/conversation'.format(self.__get_api_prefix())\n headers = {**self.session.headers, **self.__get_headers(token), 'Accept': 'text/event-stream'}\n\n return self._request_sse(url, headers, data)\n\n def __update_conversation(self, conversation_id, data, raw=False, token=None):\n url = '{}/api/conversation/{}'.format(self.__get_api_prefix(), conversation_id)\n resp = self.session.patch(url=url, headers=self.__get_headers(token), json=data, **self.req_kwargs)\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n raise Exception('update conversation failed: ' + self.__get_error(resp))\n\n result = resp.json()\n if 'success' not in result:\n raise Exception('update conversation failed: ' + resp.text)\n\n return result['success']\n\n @staticmethod\n def __get_error(resp):\n try:\n return 
str(resp.json()['detail'])\n        except Exception:\n            return resp.text\n\n\nclass ChatCompletion(API):\n    def __init__(self, proxy=None):\n        self.session = requests.Session()\n        self.req_kwargs = {\n            'proxies': {\n                'http': proxy,\n                'https': proxy,\n            } if proxy else None,\n            'verify': where(),\n            'timeout': 600,\n            'allow_redirects': False,\n        }\n\n        self.user_agent = 'pandora/{}'.format(__version__)\n\n        super().__init__(proxy, self.req_kwargs['verify'])\n\n    def __get_headers(self, api_key):\n        return {\n            'Authorization': 'Bearer ' + api_key,\n            'User-Agent': self.user_agent,\n            'Content-Type': 'application/json',\n        }\n\n    def request(self, api_key, model, messages, stream=True, **kwargs):\n        data = {\n            'model': model,\n            'messages': messages,\n            **kwargs,\n            'stream': stream,\n        }\n\n        return self.__request_conversation(api_key, data, stream)\n\n    def __request_conversation(self, api_key, data, stream):\n        default = default_api_prefix()\n\n        if api_key.startswith('fk-') or api_key.startswith('pk-'):\n            prefix = default\n        else:\n            prefix = getenv('OPENAI_API_PREFIX', default)\n        url = '{}/v1/chat/completions'.format(prefix)\n\n        if stream:\n            headers = {**self.__get_headers(api_key), 'Accept': 'text/event-stream'}\n            return self._request_sse(url, headers, data)\n\n        resp = self.session.post(url=url, headers=self.__get_headers(api_key), json=data, **self.req_kwargs)\n\n        def __generate_wrap():\n            yield resp.json()\n\n        return resp.status_code, resp.headers, __generate_wrap()\n", "path": "src/pandora/openai/api.py", "repo_name": "zhile-io/pandora", "size": 11748 }, { "code": "# -*- coding: utf-8 -*-\n\nimport datetime\nimport re\nfrom datetime import datetime as dt\nfrom urllib.parse import urlparse, parse_qs\n\nimport requests\nfrom certifi import where\n\nfrom ..exts.config import default_api_prefix\n\n\nclass Auth0:\n    def __init__(self, email: str, password: str, proxy: str = None, use_cache: bool = True, mfa: str = None):\n        self.session_token = None\n        self.email = email\n        self.password = password\n        
self.use_cache = use_cache\n self.mfa = mfa\n self.session = requests.Session()\n self.req_kwargs = {\n 'proxies': {\n 'http': proxy,\n 'https': proxy,\n } if proxy else None,\n 'verify': where(),\n 'timeout': 100,\n }\n self.access_token = None\n self.refresh_token = None\n self.expires = None\n self.user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) ' \\\n 'Chrome/109.0.0.0 Safari/537.36'\n\n @staticmethod\n def __check_email(email: str):\n regex = r'\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,7}\\b'\n return re.fullmatch(regex, email)\n\n def auth(self, login_local=False) -> str:\n if self.use_cache and self.access_token and self.expires and self.expires > dt.now():\n return self.access_token\n\n if not self.__check_email(self.email) or not self.password:\n raise Exception('invalid email or password.')\n\n return self.__part_one() if login_local else self.get_access_token_proxy()\n\n def get_refresh_token(self):\n return self.refresh_token\n\n def __part_one(self) -> str:\n url = '{}/auth/preauth'.format(default_api_prefix())\n resp = self.session.get(url, allow_redirects=False, **self.req_kwargs)\n\n if resp.status_code == 200:\n json = resp.json()\n if 'preauth_cookie' not in json or not json['preauth_cookie']:\n raise Exception('Get preauth cookie failed.')\n\n return self.__part_two(json['preauth_cookie'])\n else:\n raise Exception('Error request preauth.')\n\n def __part_two(self, preauth: str) -> str:\n code_challenge = 'w6n3Ix420Xhhu-Q5-mOOEyuPZmAsJHUbBpO8Ub7xBCY'\n code_verifier = 'yGrXROHx_VazA0uovsxKfE263LMFcrSrdm4SlC-rob8'\n\n url = 'https://auth0.openai.com/authorize?client_id=pdlLIX2Y72MIl2rhLhTE9VV9bN905kBh&audience=https%3A%2F' \\\n '%2Fapi.openai.com%2Fv1&redirect_uri=com.openai.chat%3A%2F%2Fauth0.openai.com%2Fios%2Fcom.openai.chat' \\\n '%2Fcallback&scope=openid%20email%20profile%20offline_access%20model.request%20model.read' \\\n 
'%20organization.read%20offline&response_type=code&code_challenge={}' \\\n              '&code_challenge_method=S256&prompt=login&preauth_cookie={}'.format(code_challenge, preauth)\n        return self.__part_three(code_verifier, url)\n\n    def __part_three(self, code_verifier, url: str) -> str:\n        headers = {\n            'User-Agent': self.user_agent,\n            'Referer': 'https://ios.chat.openai.com/',\n        }\n        resp = self.session.get(url, headers=headers, allow_redirects=True, **self.req_kwargs)\n\n        if resp.status_code == 200:\n            try:\n                url_params = parse_qs(urlparse(resp.url).query)\n                state = url_params['state'][0]\n                return self.__part_four(code_verifier, state)\n            except (KeyError, IndexError) as exc:\n                raise Exception('Rate limit hit.') from exc\n        else:\n            raise Exception('Error request login url.')\n\n    def __part_four(self, code_verifier: str, state: str) -> str:\n        url = 'https://auth0.openai.com/u/login/identifier?state=' + state\n        headers = {\n            'User-Agent': self.user_agent,\n            'Referer': url,\n            'Origin': 'https://auth0.openai.com',\n        }\n        data = {\n            'state': state,\n            'username': self.email,\n            'js-available': 'true',\n            'webauthn-available': 'true',\n            'is-brave': 'false',\n            'webauthn-platform-available': 'false',\n            'action': 'default',\n        }\n        resp = self.session.post(url, headers=headers, data=data, allow_redirects=False, **self.req_kwargs)\n\n        if resp.status_code == 302:\n            return self.__part_five(code_verifier, state)\n        else:\n            raise Exception('Error check email.')\n\n    def __part_five(self, code_verifier: str, state: str) -> str:\n        url = 'https://auth0.openai.com/u/login/password?state=' + state\n        headers = {\n            'User-Agent': self.user_agent,\n            'Referer': url,\n            'Origin': 'https://auth0.openai.com',\n        }\n        data = {\n            'state': state,\n            'username': self.email,\n            'password': self.password,\n            'action': 'default',\n        }\n\n        resp = self.session.post(url, headers=headers, data=data, allow_redirects=False, **self.req_kwargs)\n        if resp.status_code == 302:\n            location = resp.headers['Location']\n            if not 
location.startswith('/authorize/resume?'):\n raise Exception('Login failed.')\n\n return self.__part_six(code_verifier, location, url)\n\n if resp.status_code == 400:\n raise Exception('Wrong email or password.')\n else:\n raise Exception('Error login.')\n\n def __part_six(self, code_verifier: str, location: str, ref: str) -> str:\n url = 'https://auth0.openai.com' + location\n headers = {\n 'User-Agent': self.user_agent,\n 'Referer': ref,\n }\n\n resp = self.session.get(url, headers=headers, allow_redirects=False, **self.req_kwargs)\n if resp.status_code == 302:\n location = resp.headers['Location']\n if location.startswith('/u/mfa-otp-challenge?'):\n if not self.mfa:\n raise Exception('MFA required.')\n return self.__part_seven(code_verifier, location)\n\n if not location.startswith('com.openai.chat://auth0.openai.com/ios/com.openai.chat/callback?'):\n raise Exception('Login callback failed.')\n\n return self.get_access_token(code_verifier, resp.headers['Location'])\n\n raise Exception('Error login.')\n\n def __part_seven(self, code_verifier: str, location: str) -> str:\n url = 'https://auth0.openai.com' + location\n data = {\n 'state': parse_qs(urlparse(url).query)['state'][0],\n 'code': self.mfa,\n 'action': 'default',\n }\n headers = {\n 'User-Agent': self.user_agent,\n 'Referer': url,\n 'Origin': 'https://auth0.openai.com',\n }\n\n resp = self.session.post(url, headers=headers, data=data, allow_redirects=False, **self.req_kwargs)\n if resp.status_code == 302:\n location = resp.headers['Location']\n if not location.startswith('/authorize/resume?'):\n raise Exception('MFA failed.')\n\n return self.__part_six(code_verifier, location, url)\n\n if resp.status_code == 400:\n raise Exception('Wrong MFA code.')\n else:\n raise Exception('Error login.')\n\n def __parse_access_token(self, resp):\n if resp.status_code == 200:\n json = resp.json()\n if 'access_token' not in json:\n raise Exception('Get access token failed, maybe you need a proxy.')\n\n if 'refresh_token' 
in json:\n self.refresh_token = json['refresh_token']\n\n self.access_token = json['access_token']\n self.expires = dt.utcnow() + datetime.timedelta(seconds=json['expires_in']) - datetime.timedelta(minutes=5)\n return self.access_token\n else:\n raise Exception(resp.text)\n\n def get_access_token(self, code_verifier: str, callback_url: str) -> str:\n url_params = parse_qs(urlparse(callback_url).query)\n\n if 'error' in url_params:\n error = url_params['error'][0]\n error_description = url_params['error_description'][0] if 'error_description' in url_params else ''\n raise Exception('{}: {}'.format(error, error_description))\n\n if 'code' not in url_params:\n raise Exception('Error get code from callback url.')\n\n url = 'https://auth0.openai.com/oauth/token'\n headers = {\n 'User-Agent': self.user_agent,\n }\n data = {\n 'redirect_uri': 'com.openai.chat://auth0.openai.com/ios/com.openai.chat/callback',\n 'grant_type': 'authorization_code',\n 'client_id': 'pdlLIX2Y72MIl2rhLhTE9VV9bN905kBh',\n 'code': url_params['code'][0],\n 'code_verifier': code_verifier,\n }\n resp = self.session.post(url, headers=headers, json=data, allow_redirects=False, **self.req_kwargs)\n\n return self.__parse_access_token(resp)\n\n def get_access_token_proxy(self) -> str:\n url = '{}/auth/login'.format(default_api_prefix())\n headers = {\n 'User-Agent': self.user_agent,\n }\n data = {\n 'username': self.email,\n 'password': self.password,\n 'mfa_code': self.mfa,\n }\n resp = self.session.post(url=url, headers=headers, data=data, allow_redirects=False, **self.req_kwargs)\n\n return self.__parse_access_token(resp)\n", "path": "src/pandora/openai/auth.py", "repo_name": "zhile-io/pandora", "size": 9477 }, { "code": "# -*- coding: utf-8 -*-\n\nimport tiktoken\n\n\ndef gpt_num_tokens(messages, model='gpt-3.5-turbo'):\n encoding = tiktoken.encoding_for_model(model)\n\n num_tokens = 0\n for message in messages:\n num_tokens += 4\n for key, value in message.items():\n num_tokens += 
len(encoding.encode(value))\n if 'name' == key:\n num_tokens -= 1\n num_tokens += 2\n\n return num_tokens\n", "path": "src/pandora/openai/token.py", "repo_name": "zhile-io/pandora", "size": 421 }, { "code": "# -*- coding: utf-8 -*-\n\nimport os\n\nfrom rich.console import Console as RichConsole\nfrom rich.theme import Theme\n\n\nclass Console:\n __theme = Theme({\n 'info': 'white',\n 'info_b': 'white bold',\n 'debug': 'cyan',\n 'debug_b': 'cyan bold',\n 'warn': 'yellow',\n 'warn_b': 'yellow bold',\n 'error': 'red',\n 'error_b': 'red bold',\n 'success': 'green',\n 'success_b': 'green bold',\n })\n\n __console = RichConsole(theme=__theme)\n\n @staticmethod\n def clear():\n os.system('cls' if 'nt' == os.name else 'clear')\n\n @staticmethod\n def print(msg):\n Console.__console.print(msg)\n\n @staticmethod\n def info(text: str, highlight=False, bold=False, end='\\n'):\n Console.__console.print(text, style='info_b' if bold else 'info', highlight=highlight, end=end, markup=False)\n\n @staticmethod\n def info_b(text: str, highlight=False, end='\\n'):\n Console.info(text, highlight, True, end)\n\n @staticmethod\n def info_h(text: str, bold=False, end='\\n'):\n Console.info(text, True, bold, end)\n\n @staticmethod\n def info_bh(text: str, end='\\n'):\n Console.info(text, True, True, end)\n\n @staticmethod\n def debug(text: str, highlight=False, bold=False, end='\\n'):\n Console.__console.print(text, style='debug_b' if bold else 'debug', highlight=highlight, end=end, markup=False)\n\n @staticmethod\n def debug_b(text: str, highlight=False, end='\\n'):\n Console.debug(text, highlight, True, end)\n\n @staticmethod\n def debug_h(text: str, bold=False, end='\\n'):\n Console.debug(text, True, bold, end)\n\n @staticmethod\n def debug_bh(text: str, end='\\n'):\n Console.debug(text, True, True, end)\n\n @staticmethod\n def error(text: str, highlight=False, bold=False, end='\\n'):\n Console.__console.print(text, style='error_b' if bold else 'error', highlight=highlight, end=end, 
markup=False)\n\n @staticmethod\n def error_b(text: str, highlight=False, end='\\n'):\n Console.error(text, highlight, True, end)\n\n @staticmethod\n def error_h(text: str, bold=False, end='\\n'):\n Console.error(text, True, bold, end)\n\n @staticmethod\n def error_bh(text: str, end='\\n'):\n Console.error(text, True, True, end)\n\n @staticmethod\n def success(text: str, highlight=False, bold=False, end='\\n'):\n Console.__console.print(text, style='success_b' if bold else 'success', highlight=highlight, end=end,\n markup=False)\n\n @staticmethod\n def success_b(text: str, highlight=False, end='\\n'):\n Console.success(text, highlight, True, end)\n\n @staticmethod\n def success_h(text: str, bold=False, end='\\n'):\n Console.success(text, True, bold, end)\n\n @staticmethod\n def success_bh(text: str, end='\\n'):\n Console.success(text, True, True, end)\n\n @staticmethod\n def warn(text: str, highlight=False, bold=False, end='\\n'):\n Console.__console.print(text, style='warn_b' if bold else 'warn', highlight=highlight, end=end, markup=False)\n\n @staticmethod\n def warn_b(text: str, highlight=False, end='\\n'):\n Console.warn(text, highlight, True, end)\n\n @staticmethod\n def warn_h(text: str, bold=False, end='\\n'):\n Console.warn(text, True, bold, end)\n\n @staticmethod\n def warn_bh(text: str, end='\\n'):\n Console.warn(text, True, True, end)\n", "path": "src/pandora/openai/utils.py", "repo_name": "zhile-io/pandora", "size": 3420 }, { "code": "# -*- coding: utf-8 -*-\n\nimport uuid\nfrom datetime import datetime as dt\n\n\nclass Prompt:\n def __init__(self, prompt_id=None, role=None, content=None, parent=None):\n self.prompt_id = prompt_id or str(uuid.uuid4())\n self.parent_id = None\n self.role = role\n self.content = content\n self.children = []\n self.create_time = dt.now().timestamp()\n\n if parent:\n self.parent_id = parent.prompt_id\n parent.add_child(self.prompt_id)\n\n def add_child(self, prompt_id):\n self.children.append(prompt_id)\n\n def 
get_message(self, end=True):\n return None\n\n def get_info(self):\n return {\n 'id': self.prompt_id,\n 'message': self.get_message(),\n 'parent': self.parent_id,\n 'children': self.children\n }\n\n\nclass SystemPrompt(Prompt):\n def __init__(self, content, parent):\n super().__init__(role='system', content=content, parent=parent)\n\n def get_message(self, end=True):\n return {\n 'id': self.prompt_id,\n 'author': {\n 'role': self.role,\n 'name': None,\n 'metadata': {}\n },\n 'create_time': self.create_time,\n 'update_time': None,\n 'content': {\n 'content_type': 'text',\n 'parts': ['']\n },\n 'end_turn': True,\n 'weight': 1.0,\n 'metadata': {},\n 'recipient': 'all'\n }\n\n\nclass UserPrompt(Prompt):\n def __init__(self, prompt_id, content, parent):\n super().__init__(prompt_id=prompt_id, role='user', content=content, parent=parent)\n\n def get_message(self, end=True):\n return {\n 'id': self.prompt_id,\n 'author': {\n 'role': self.role,\n 'name': None,\n 'metadata': {}\n },\n 'create_time': self.create_time,\n 'update_time': None,\n 'content': {\n 'content_type': 'text',\n 'parts': [self.content]\n },\n 'end_turn': None,\n 'weight': 1.0,\n 'metadata': {\n 'timestamp_': 'absolute',\n 'message_type': None\n },\n 'recipient': 'all'\n }\n\n\nclass GptPrompt(Prompt):\n def __init__(self, parent, model):\n super().__init__(role='assistant', content='', parent=parent)\n self.model = model\n\n def append_content(self, content):\n self.content += content\n\n return self\n\n def get_message(self, end=True):\n return {\n 'id': self.prompt_id,\n 'author': {\n 'role': self.role,\n 'name': None,\n 'metadata': {}\n },\n 'create_time': self.create_time,\n 'update_time': None,\n 'content': {\n 'content_type': 'text',\n 'parts': [self.content]\n },\n 'end_turn': False if end else None,\n 'weight': 1.0,\n 'metadata': {\n 'message_type': None,\n 'model_slug': self.model,\n 'finish_details': {\n 'type': 'stop'\n } if end else None,\n 'timestamp_': 'absolute'\n },\n 'recipient': 'all'\n 
}\n\n\nclass Conversation:\n def __init__(self):\n self.conversation_id = str(uuid.uuid4())\n self.title = 'New chat'\n self.create_time = dt.now().timestamp()\n self.current_node = None\n self.prompts = {}\n\n def add_prompt(self, prompt):\n self.prompts[prompt.prompt_id] = prompt\n self.current_node = prompt.prompt_id\n\n return prompt\n\n def get_prompt(self, prompt_id):\n return self.prompts.get(prompt_id)\n\n def get_prompts(self):\n return self.prompts\n\n def set_title(self, title):\n self.title = title\n\n def get_title(self):\n return self.title\n\n def get_messages_directly(self, message_id):\n messages = []\n while True:\n prompt = self.get_prompt(message_id)\n if not prompt.parent_id:\n break\n\n messages.insert(0, {\n 'role': prompt.role,\n 'content': prompt.content\n })\n message_id = prompt.parent_id\n\n return messages\n\n def get_messages(self, message_id, model):\n messages = []\n user_prompt = None\n while True:\n prompt = self.get_prompt(message_id)\n if not prompt.parent_id:\n break\n\n if not user_prompt and isinstance(prompt, UserPrompt):\n user_prompt = prompt\n\n messages.insert(0, {\n 'role': prompt.role,\n 'content': prompt.content\n })\n message_id = prompt.parent_id\n\n return user_prompt, self.add_prompt(GptPrompt(user_prompt, model)), messages\n\n def get_info(self):\n mapping = {}\n for prompt_id in self.prompts:\n mapping[prompt_id] = self.prompts[prompt_id].get_info()\n\n return {\n 'title': self.title,\n 'create_time': self.create_time,\n 'mapping': mapping,\n 'moderation_results': [],\n 'current_node': self.current_node,\n }\n\n\nclass Conversations:\n def __init__(self):\n self.__data = []\n\n def list(self, offset, limit):\n return len(self.__data), self.__data[offset: limit]\n\n def clear(self):\n self.__data = []\n\n def delete(self, conversation):\n self.__data = [x for x in self.__data if conversation.conversation_id != x.conversation_id]\n\n def new(self):\n conversation = Conversation()\n self.__data.insert(0, 
conversation)\n\n return conversation\n\n def get(self, conversation_id):\n for x in self.__data:\n if x.conversation_id == conversation_id:\n return x\n\n return None\n\n def guard_get(self, conversation_id):\n conversation = self.get(conversation_id)\n if not conversation:\n raise Exception('Can\\'t load conversation {}'.format(conversation_id))\n\n return conversation\n", "path": "src/pandora/turbo/base.py", "repo_name": "zhile-io/pandora", "size": 6263 }, { "code": "# -*- coding: utf-8 -*-\n\nimport json\nfrom datetime import datetime as dt\nfrom os import getenv\n\nfrom requests import Response\n\nfrom .base import Conversations, UserPrompt, Prompt, SystemPrompt\nfrom ..openai.api import ChatCompletion\nfrom ..openai.token import gpt_num_tokens\n\n\nclass TurboGPT:\n DEFAULT_SYSTEM_PROMPT = 'You are ChatGPT, a large language model trained by OpenAI. ' \\\n 'Answer as concisely as possible.\\nKnowledge cutoff: 2021-09-01\\n' \\\n 'Current date: {}'.format(dt.now().strftime('%Y-%m-%d'))\n TITLE_PROMPT = 'Generate a brief title for our conversation.'\n MAX_TOKENS = {\n 'gpt-3.5-turbo': 4096,\n 'gpt-4': 8192,\n 'gpt-4-32k': 32768,\n }\n FAKE_TOKENS = {\n 'gpt-3.5-turbo': 8191,\n 'gpt-4': 4095,\n 'gpt-4-32k': 8195,\n }\n\n def __init__(self, api_keys: dict, proxy=None):\n self.api_keys = api_keys\n self.api_keys_key_list = list(api_keys)\n self.default_api_keys_key = self.api_keys_key_list[0]\n\n self.api = ChatCompletion(proxy)\n self.conversations_map = {}\n self.system_prompt = getenv('API_SYSTEM_PROMPT', self.DEFAULT_SYSTEM_PROMPT)\n\n def __get_conversations(self, api_keys_key=None):\n if api_keys_key is None:\n api_keys_key = self.default_api_keys_key\n\n if api_keys_key not in self.conversations_map:\n self.conversations_map[api_keys_key] = Conversations()\n\n return self.conversations_map[api_keys_key]\n\n def __is_fake_api(self, token=None):\n api_key = self.get_access_token(token)\n return api_key.startswith('fk-') or api_key.startswith('pk-')\n\n\n def 
get_access_token(self, token_key=None):\n return self.api_keys[token_key or self.default_api_keys_key]\n\n def list_token_keys(self):\n return self.api_keys_key_list\n\n def list_models(self, raw=False, token=None):\n fake_api = self.__is_fake_api(token)\n\n models = {\n 'models': [\n {\n 'slug': 'gpt-3.5-turbo',\n 'max_tokens': self.FAKE_TOKENS['gpt-3.5-turbo'] if fake_api else self.MAX_TOKENS['gpt-3.5-turbo'],\n 'title': 'GPT-3.5',\n 'description': 'Turbo is the api model that powers ChatGPT',\n 'tags': []\n },\n {\n 'slug': 'gpt-4',\n 'max_tokens': self.FAKE_TOKENS['gpt-4'] if fake_api else self.MAX_TOKENS['gpt-4'],\n 'title': 'GPT-4',\n 'description': 'More capable than any GPT-3.5, able to do complex tasks, and optimized for chat',\n 'tags': []\n },\n {\n 'slug': 'gpt-4-32k',\n 'max_tokens': self.FAKE_TOKENS['gpt-4-32k'] if fake_api else self.MAX_TOKENS['gpt-4-32k'],\n 'title': 'GPT-4 32K',\n 'description': 'Same capabilities as the base gpt-4 mode but with 4x the context length',\n 'tags': []\n }\n ]\n }\n\n if raw:\n return self.__wrap_response(models)\n\n return models['models']\n\n def list_conversations(self, offset, limit, raw=False, token=None):\n offset = int(offset)\n limit = int(limit)\n total, items = self.__get_conversations(token).list(offset, limit)\n\n stripped = []\n for item in items:\n stripped.append({\n 'id': item.conversation_id,\n 'title': item.title,\n 'create_time': dt.utcfromtimestamp(item.create_time).isoformat(),\n })\n\n result = {'items': stripped, 'total': total, 'limit': limit, 'offset': offset}\n\n if raw:\n return self.__wrap_response(result)\n\n return result\n\n def get_conversation(self, conversation_id, raw=False, token=None):\n def __shadow():\n try:\n conversation = self.__get_conversations(token).guard_get(conversation_id)\n except Exception as e:\n return self.__out_error(str(e), 404)\n\n return self.__wrap_response(conversation.get_info())\n\n resp = __shadow()\n\n if raw:\n return resp\n\n if resp.status_code != 
200:\n raise Exception('get conversation failed: ' + resp.json()['detail'])\n\n return resp.json()\n\n def clear_conversations(self, raw=False, token=None):\n def __shadow():\n self.__get_conversations(token).clear()\n\n result = {\n 'success': True\n }\n\n return self.__wrap_response(result)\n\n resp = __shadow()\n\n if raw:\n return resp\n\n return resp.json()['success']\n\n def del_conversation(self, conversation_id, raw=False, token=None):\n def __shadow():\n conversations = self.__get_conversations(token)\n\n try:\n conversation = conversations.guard_get(conversation_id)\n except Exception as e:\n return self.__out_error(str(e), 404)\n\n conversations.delete(conversation)\n\n result = {\n 'success': True\n }\n\n return self.__wrap_response(result)\n\n resp = __shadow()\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n raise Exception('delete conversation failed: ' + resp.json()['detail'])\n\n return resp.json()['success']\n\n def gen_conversation_title(self, conversation_id, model, message_id, raw=False, token=None):\n def __shadow():\n conversation = self.__get_conversations(token).get(conversation_id)\n if not conversation:\n return self.__out_error('Conversation not found', 404)\n\n if 'New chat' != conversation.title:\n message = {\n 'message': 'Conversation {} already has title \\'{}\\''.format(conversation_id, conversation.title)\n }\n return self.__wrap_response(message)\n\n messages = conversation.get_messages_directly(message_id)\n messages.append({'role': 'user', 'content': self.TITLE_PROMPT})\n\n status, header, generator = self.api.request(self.get_access_token(token), model, messages, False)\n last_ok, last = self.__get_completion(status, next(generator))\n\n if not last_ok:\n return self.__out_error(last['detail'], status)\n\n conversation.set_title(last.strip('\"'))\n\n result = {\n 'title': conversation.title\n }\n\n return self.__wrap_response(result)\n\n resp = __shadow()\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n 
raise Exception('generate title failed: ' + resp.text)\n\n return resp.json()['title']\n\n def set_conversation_title(self, conversation_id, title, raw=False, token=None):\n def __shadow():\n try:\n conversation = self.__get_conversations(token).guard_get(conversation_id)\n except Exception as e:\n return self.__out_error(str(e), 404)\n\n conversation.set_title(title)\n\n result = {\n 'success': True\n }\n\n return self.__wrap_response(result)\n\n resp = __shadow()\n\n if raw:\n return resp\n\n if resp.status_code != 200:\n raise Exception('update conversation failed: ' + resp.json()['detail'])\n\n return resp.json()['success']\n\n def talk(self, content, model, message_id, parent_message_id, conversation_id=None, stream=True, token=None):\n system_prompt = None\n if conversation_id:\n conversation = self.__get_conversations(token).get(conversation_id)\n if not conversation:\n return self.__out_error_stream('Conversation not found', 404)\n\n parent = conversation.get_prompt(parent_message_id)\n else:\n conversation = self.__get_conversations(token).new()\n parent = conversation.add_prompt(Prompt(parent_message_id))\n parent = system_prompt = conversation.add_prompt(SystemPrompt(self.system_prompt, parent))\n\n conversation.add_prompt(UserPrompt(message_id, content, parent))\n\n user_prompt, gpt_prompt, messages = conversation.get_messages(message_id, model)\n try:\n status, headers, generator = self.api.request(self.get_access_token(token), model,\n self.__reduce_messages(messages, model, token), stream)\n except Exception as e:\n return self.__out_error_stream(str(e))\n\n def __out_generator():\n if 200 == status and system_prompt and stream:\n yield self.__out_stream(conversation, system_prompt)\n yield self.__out_stream(conversation, user_prompt)\n\n for line in generator:\n yield self.__map_conversation(status, conversation, gpt_prompt, line)\n\n return status, headers, __out_generator()\n\n def goon(self, model, parent_message_id, conversation_id, stream=True, 
token=None):\n return self.regenerate_reply(None, model, conversation_id, parent_message_id, None, stream, token)\n\n def regenerate_reply(self, prompt, model, conversation_id, message_id, parent_message_id, stream=True, token=None):\n if not conversation_id:\n return self.__out_error_stream('Miss conversation_id', 400)\n\n conversation = self.__get_conversations(token).get(conversation_id)\n if not conversation:\n return self.__out_error_stream('Conversation not found', 404)\n\n user_prompt, gpt_prompt, messages = conversation.get_messages(message_id, model)\n try:\n status, headers, generator = self.api.request(self.get_access_token(token), model,\n self.__reduce_messages(messages, model, token), stream)\n except Exception as e:\n return self.__out_error_stream(str(e))\n\n def __out_generator():\n for line in generator:\n yield self.__map_conversation(status, conversation, gpt_prompt, line)\n\n return status, headers, __out_generator()\n\n def __reduce_messages(self, messages, model, token=None):\n max_tokens = self.FAKE_TOKENS[model] if self.__is_fake_api(token) else self.MAX_TOKENS[model]\n\n while gpt_num_tokens(messages) > max_tokens - 200:\n if len(messages) < 2:\n raise Exception('prompt too long')\n\n messages.pop(1)\n\n return messages\n\n def __out_error(self, error, status=500):\n result = {\n 'detail': error\n }\n\n return self.__wrap_response(result, status)\n\n def __out_error_stream(self, error, status=500):\n resp = self.__out_error(error, status)\n\n def __generator():\n yield resp.json()\n\n return resp.status_code, resp.headers, __generator()\n\n @staticmethod\n def __out_stream(conversation, prompt, end=True):\n return {\n 'message': prompt.get_message(end),\n 'conversation_id': conversation.conversation_id,\n 'error': None,\n }\n\n @staticmethod\n def __wrap_response(data, status=200):\n resp = Response()\n resp.status_code = status\n resp._content = json.dumps(data).encode('utf-8')\n resp.headers['Content-Type'] = 'application/json'\n\n 
return resp\n\n @staticmethod\n def __get_completion(status, data):\n if status != 200:\n error = data['error']['message'] if 'error' in data else 'Unknown error'\n result = {\n 'detail': error\n }\n return False, result\n\n choice = data['choices'][0]\n if 'message' in choice:\n text = choice['message'].get('content', '')\n else:\n text = choice['delta'].get('content', '')\n\n return True, text\n\n def __map_conversation(self, status, conversation, gpt_prompt, data):\n success, result = self.__get_completion(status, data)\n if not success:\n return result\n\n choice = data['choices'][0]\n is_stop = 'stop' == choice['finish_reason']\n\n return self.__out_stream(conversation, gpt_prompt.append_content(result), is_stop)\n", "path": "src/pandora/turbo/chat.py", "repo_name": "zhile-io/pandora", "size": 12455 } ]
repo_name: tairov/llama2.mojo
language: python
created_at: 2023-09-10T21:54:51
license: MIT License
description: Inference Llama 2 in one file of pure 🔥
stars: 1,236
forks: 76
url: https://github.com/tairov/llama2.mojo
[ { "code": "import gradio as gr\nimport subprocess\nimport sys\nfrom pathlib import Path\n\nasync def generate(prompt, model_name, seed=0, temperature=0.5, num_tokens=256):\n # stream stout\n base = \"\"#\"../model/\"\n tokenizer_name = \"tokenizer.bin\"\n if model_name == \"tl-chat.bin\":\n tokenizer_name = 'tok_tl-chat.bin'\n process = subprocess.Popen(\n [\n \"mojo\",\n \"llama2.mojo\",\n Path(base + model_name),\n \"-s\",\n str(seed),\n \"-n\",\n str(num_tokens),\n \"-t\",\n str(temperature),\n \"-i\",\n prompt,\n \"-z\",\n Path(base + tokenizer_name)\n ],\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n )\n text = \"\"\n for char in iter(lambda: process.stdout.read(1), b\"\"):\n char_decoded = char.decode(\"utf-8\", errors=\"ignore\")\n text += char_decoded\n yield text\n\n\nwith gr.Blocks() as demo:\n gr.Markdown(\n \"\"\"\n# llama2.🔥\n## [Mojo](https://docs.modular.com/mojo/) implementation of [llama2.c](https://github.com/karpathy/llama2.c) by [@tairov](https://github.com/tairov)\nSource: https://github.com/tairov/llama2.mojo\n \"\"\"\n )\n with gr.Row():\n with gr.Column():\n prompt = gr.Textbox(label=\"Prompt\", placeholder=\"Add your prompt here...\")\n seed = gr.Slider(\n minimum=0,\n maximum=2**53,\n value=0,\n step=1,\n label=\"Seed\",\n randomize=True,\n )\n temperature = gr.Slider(\n minimum=0.0, maximum=2.0, step=0.01, value=0.0, label=\"Temperature\"\n )\n num_tokens = gr.Slider(\n minimum=1, maximum=256, value=256, label=\"Number of tokens\"\n )\n model_name = gr.Dropdown(\n [\"stories15M.bin\", \"stories42M.bin\", \"stories110M.bin\", \"tl-chat.bin\"],\n value=\"stories15M.bin\",\n label=\"Model Size\",\n )\n with gr.Row():\n stop = gr.Button(\"Stop\")\n run = gr.Button(\"Run\")\n with gr.Column(scale=2):\n output_text = gr.Textbox(label=\"Generated Text\")\n\n # update maximum number of tokens based on model size\n model_name.change(\n lambda x: gr.update(maximum=1024)\n if x == \"stories110M.bin\" or x == \"stories42M.bin\" or x == 
\"tl-chat.bin\"\n else gr.update(maximum=256),\n model_name,\n num_tokens,\n queue=False,\n )\n click_event = run.click(\n fn=generate,\n inputs=[prompt, model_name, seed, temperature, num_tokens],\n outputs=output_text,\n )\n stop.click(fn=None, inputs=None, outputs=None, cancels=[click_event])\n\ndemo.queue()\ndemo.launch(server_name=\"0.0.0.0\")\n", "path": "gradio_app.py", "repo_name": "tairov/llama2.mojo", "size": 2826 } ]
repo_name: Pennyw0rth/NetExec
language: python
created_at: 2023-09-08T15:36:00
license: BSD 2-Clause "Simplified" License
description: The Network Execution Tool
stars: 794
forks: 71
url: https://github.com/Pennyw0rth/NetExec
[{"code":"#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\nimport os\nimport shutil\nimport subpro(...TRUNCATED)
repo_name: docker/genai-stack
language: python
created_at: 2023-09-13T12:03:09
license: Creative Commons Zero v1.0 Universal
description: Langchain + Docker + Neo4j
stars: 548
forks: 93
url: https://github.com/docker/genai-stack
[{"code":"import os\n\nfrom langchain.graphs import Neo4jGraph\nfrom dotenv import load_dotenv\nfrom(...TRUNCATED)
repo_name: nicolas-hbt/pygraft
language: python
created_at: 2023-09-07T04:28:45
license: MIT License
description: Configurable Generation of Synthetic Schemas and Knowledge Graphs at Your Fingertips
stars: 513
forks: 35
url: https://github.com/nicolas-hbt/pygraft
[{"code":"# Configuration file for the Sphinx documentation builder.\n\nfrom datetime import date\ni(...TRUNCATED)
repo_name: kbre93/every-breath-you-take
language: python
created_at: 2023-09-16T09:19:34
license: MIT License
description: Heart Rate Variability Training with the Polar H10 Monitor
stars: 500
forks: 19
url: https://github.com/kbre93/every-breath-you-take
[{"code":"\nimport os\nos.environ['QT_API'] = 'PySide6' # For qasync to know which binding is being (...TRUNCATED)
repo_name: mohamed-chs/chatgpt-history-export-to-md
language: python
created_at: 2023-09-16T05:35:37
license: MIT License
description: "A Python script to effortlessly extract and format your ChatGPT conversations data export from JSON(...TRUNCATED)
stars: 486
forks: 14
url: https://github.com/mohamed-chs/chatgpt-history-export-to-md
[{"code":"\"\"\"Module for handling user configuration and updating the models.\"\"\"\n\nimport json(...TRUNCATED)
repo_name: google/break-a-scene
language: python
created_at: 2023-09-13T08:10:12
license: Apache License 2.0
description: "Official implementation for \"Break-A-Scene: Extracting Multiple Concepts from a Single Image\" [SI(...TRUNCATED)
stars: 363
forks: 13
url: https://github.com/google/break-a-scene
[{"code":"\"\"\"\nCopyright 2023 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \(...TRUNCATED)
repo_name: persimmon-ai-labs/adept-inference
language: python
created_at: 2023-09-07T15:02:28
license: Apache License 2.0
description: Inference code for Persimmon-8B
stars: 336
forks: 16
url: https://github.com/persimmon-ai-labs/adept-inference
[{"code":"# coding=utf-8\n# Copyright (c) 2023 ADEPT AI LABS INC.\n# This file is based on code by t(...TRUNCATED)
repo_name: google-deepmind/alphamissense
language: python
created_at: 2023-09-13T14:34:42
license: Apache License 2.0
description: null
stars: 327
forks: 32
url: https://github.com/google-deepmind/alphamissense
[{"code":"# Copyright 2023 DeepMind Technologies Limited\n#\n# Licensed under the Apache License, Ve(...TRUNCATED)

Dataset Card for "repo_dedup_sep2023"

More Information needed
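Every preview row above shares one schema: repo_name, language, created_at, license, description, stars, forks, url, and a repo_code list bundling the sampled files of that repository. Below is a minimal sketch of working with one row as a plain Python dict; the field names mirror the preview, and the values are copied from the tairov/llama2.mojo row on this page. Loading the dataset itself via `datasets.load_dataset` would yield dicts of this shape, but the exact Hub namespace for "repo_dedup_sep2023" is not shown here, so that step is left out.

```python
# Sketch of one dataset row. Field names come from the dataset header;
# values are copied from the tairov/llama2.mojo preview row above.
row = {
    "repo_name": "tairov/llama2.mojo",
    "language": "python",
    "created_at": "2023-09-10T21:54:51",
    "license": "MIT License",
    "description": "Inference Llama 2 in one file of pure 🔥",
    "stars": 1236,
    "forks": 76,
    "url": "https://github.com/tairov/llama2.mojo",
    # repo_code bundles every sampled file of the repo; one file shown,
    # with its source elided.
    "repo_code": [
        {
            "code": "import gradio as gr\n# ...",
            "path": "gradio_app.py",
            "repo_name": "tairov/llama2.mojo",
            "size": 2826,
        },
    ],
}

# Total bytes of source code carried by this row.
total_size = sum(f["size"] for f in row["repo_code"])
print(total_size)  # 2826
```

Because each row packs a whole repository's files into `repo_code`, per-file iteration like the loop above is the natural access pattern for this dataset.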

Downloads last month: 38