Runtime error
Commit 4dd4930 · Parent: 9740d69
Rename app (13).py to app.py

app (13).py → app.py (RENAMED, +4 -4)
```diff
@@ -13,13 +13,13 @@ DEFAULT_MAX_NEW_TOKENS = 1024
 MAX_INPUT_TOKEN_LENGTH = 4000
 
 DESCRIPTION = """
-#
+# Zephyr-7b Chat
 
-💻 This Space demonstrates model [
+💻 This Space demonstrates model [Zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) by HuggingFaceH4, a Zephyr model with 7B parameters fine-tuned for chat instructions and specialized on many tasks. Feel free to play with it, or duplicate to run generations without a queue! If you want to run your own service, you can also [deploy the model on Inference Endpoints](https://huggingface.co/inference-endpoints).
 
-🔎 For more details about the
+🔎 For more details about the Zephyr family of models and how to use them with `transformers`, take a look [at our blog post](https://huggingface.co/blog/zephyr).
 
-🏃🏻 Check out our [Playground](https://huggingface.co/spaces/
+🏃🏻 Check out our [Playground](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) for a super-fast code completion demo that leverages a streaming [inference endpoint](https://huggingface.co/inference-endpoints).
 
 """
 
```
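The two constants visible around this hunk, `MAX_INPUT_TOKEN_LENGTH = 4000` and `DEFAULT_MAX_NEW_TOKENS = 1024` (from the hunk header), are the kind of limits a chat Space uses to keep the prompt plus the generation inside the model's context window. The following is a minimal hypothetical sketch of that pattern; only the constant values come from the diff above, and the helper name `trim_input_ids` is an illustration, not code from this commit:

```python
# Hypothetical sketch of how a chat Space typically uses these limits.
# Only the constant values appear in the diff; the helper is illustrative.

DEFAULT_MAX_NEW_TOKENS = 1024   # default generation budget (from the hunk header)
MAX_INPUT_TOKEN_LENGTH = 4000   # hard cap on prompt tokens (from the diff context)

def trim_input_ids(input_ids, limit=MAX_INPUT_TOKEN_LENGTH):
    """Keep only the most recent `limit` tokens so the prompt fits the context.

    Dropping the oldest tokens preserves the most recent turns of the
    conversation, which matter most for the next reply.
    """
    if len(input_ids) > limit:
        return input_ids[-limit:]
    return input_ids

# Example: a 5000-token prompt is trimmed to its last 4000 tokens.
trimmed = trim_input_ids(list(range(5000)))
```

Trimming from the front (rather than refusing the request) is the common choice in chat demos, since it silently ages out the oldest history instead of erroring on long conversations.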