wangzhang committed
Commit 95df1b5 · 1 parent: 50f8cde

Update app.py

Files changed (1)
  1. app.py (+7 lines, −18 lines)
app.py CHANGED

```diff
@@ -11,22 +11,13 @@ DEFAULT_MAX_NEW_TOKENS = 1024
 MAX_INPUT_TOKEN_LENGTH = 4096
 
 DESCRIPTION = """\
-# Llama-2 7B Chat
+# ChatSDB
 
-This Space demonstrates model [Llama-2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat) by Meta, a Llama 2 model with 7B parameters fine-tuned for chat instructions. Feel free to play with it, or duplicate to run generations without a queue! If you want to run your own service, you can also [deploy the model on Inference Endpoints](https://huggingface.co/inference-endpoints).
-
-🔎 For more details about the Llama 2 family of models and how to use them with `transformers`, take a look [at our blog post](https://huggingface.co/blog/llama2).
-
-🔨 Looking for an even more powerful model? Check out the [13B version](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat) or the large [70B model demo](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI).
+This is the AI large language model from SequoiaDB, trained on more than ten thousand real-world records, with 700 million parameters.
+ChatSDB is the AI large language model from SequoiaDB, trained on more than ten thousand real-world records, with 700 million parameters.<strong>Model 🔗: <a>https://huggingface.co/wangzhang/ChatSDB</a></strong><br><strong>Dataset 🔗: <a>https://huggingface.co/datasets/wangzhang/sdb</a></strong><br><strong>API Doc 🔗: <a>https://zgg3nzdpswxy4a-80.proxy.runpod.net/docs/</a></strong>
 """
 
-LICENSE = """
-<p/>
-
----
-As a derivate work of [Llama-2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat) by Meta,
-this demo is governed by the original [license](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat/blob/main/LICENSE.txt) and [acceptable use policy](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat/blob/main/USE_POLICY.md).
-"""
+LICENSE = """ """
 
 if not torch.cuda.is_available():
     DESCRIPTION += "\n<p>Running on CPU 🥶 This demo does not work on CPU.</p>"
@@ -126,11 +117,9 @@ chat_interface = gr.ChatInterface(
     ],
     stop_btn=None,
     examples=[
-        ["Hello there! How are you doing?"],
-        ["Can you explain briefly to me what is the Python programming language?"],
-        ["Explain the plot of Cinderella in a sentence."],
-        ["How many hours does it take a man to eat a Helicopter?"],
-        ["Write a 100-word article on 'Benefits of Open-Source in AI research'"],
+        ["How do I install SequoiaDB?"],
+        ["What are the advantages of SequoiaDB?"],
+        ["What is SequoiaDB?"],
     ],
 )
 
```
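
The diff only shows the fragments of app.py that this commit touches. As context, here is a minimal sketch of how those fragments (DESCRIPTION, LICENSE, DEFAULT_MAX_NEW_TOKENS, MAX_INPUT_TOKEN_LENGTH, and the examples list) typically plug into a gr.ChatInterface Space. The generate function, the prompt format, and the model-loading code are assumptions modeled on the llama-2-7b-chat template this Space appears to be derived from, not part of the commit; the only values taken from the diff are the constants, the example prompts, and the model id wangzhang/ChatSDB.

```python
# Hedged sketch: the full app.py is not shown in this diff, so everything outside
# the constants, the DESCRIPTION/LICENSE strings, and the examples list is an
# assumption based on the standard llama-2-7b-chat Space template.
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DEFAULT_MAX_NEW_TOKENS = 1024   # from the hunk header
MAX_INPUT_TOKEN_LENGTH = 4096   # unchanged context line in the diff

DESCRIPTION = """\
# ChatSDB

ChatSDB is the AI large language model from SequoiaDB, trained on more than
ten thousand real-world records, with 700 million parameters.
"""
LICENSE = """ """  # emptied by this commit

if not torch.cuda.is_available():
    DESCRIPTION += "\n<p>Running on CPU 🥶 This demo does not work on CPU.</p>"

model_id = "wangzhang/ChatSDB"  # model link from the new DESCRIPTION
# Assumes the repo ships standard transformers weights and tokenizer files.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")


def generate(message: str, chat_history: list[tuple[str, str]]) -> str:
    """Build a prompt from the chat history, trim it to MAX_INPUT_TOKEN_LENGTH
    tokens, and return the model's reply (non-streaming for brevity)."""
    prompt = ""
    for user_turn, assistant_turn in chat_history:
        prompt += f"User: {user_turn}\nAssistant: {assistant_turn}\n"
    prompt += f"User: {message}\nAssistant:"

    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    if input_ids.shape[1] > MAX_INPUT_TOKEN_LENGTH:
        # Keep only the most recent tokens so the prompt fits the context window.
        input_ids = input_ids[:, -MAX_INPUT_TOKEN_LENGTH:]
    input_ids = input_ids.to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=DEFAULT_MAX_NEW_TOKENS, do_sample=True)
    return tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True)


chat_interface = gr.ChatInterface(
    fn=generate,
    stop_btn=None,
    examples=[
        ["How do I install SequoiaDB?"],
        ["What are the advantages of SequoiaDB?"],
        ["What is SequoiaDB?"],
    ],
)

with gr.Blocks() as demo:
    gr.Markdown(DESCRIPTION)
    chat_interface.render()
    gr.Markdown(LICENSE)

if __name__ == "__main__":
    demo.queue().launch()
```

The actual Space most likely streams tokens (for example with TextIteratorStreamer) rather than returning the whole reply at once; the sketch keeps generation synchronous for brevity. The new description also links a dataset repo; a quick way to inspect it, assuming it is a standard datasets repo with a default configuration:

```python
from datasets import load_dataset

ds = load_dataset("wangzhang/sdb")  # dataset link from the new DESCRIPTION
print(ds)              # available splits and row counts
print(ds["train"][0])  # first record, assuming a "train" split exists
```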