Update tabbed.py
tabbed.py
CHANGED
@@ -122,7 +122,7 @@ with gr.Blocks() as demo:
     ### This is the [{config["hub"]["repo_id"]}](https://huggingface.co/{config["hub"]["repo_id"]}) quantized model file [{config["hub"]["filename"]}](https://huggingface.co/{config["hub"]["repo_id"]}/blob/main/{config["hub"]["filename"]})

     <details>
-    <summary><a href="https://huggingface.co/spaces/Nekochu/
+    <summary><a href="https://huggingface.co/spaces/Nekochu/Luminia-13B-v3-GGUF?duplicate=true">Duplicate the Space</a> to skip the queue and run in a private space or to use your own GGUF models, simply update the <a href="https://huggingface.co/spaces/Nekochu/Luminia-13B-v3-GGUF/blob/main/config.yml">config.yml</a></summary>
     <ul>
     <li>This Space uses GGUF with GPU support, so it can quickly run larger models on smaller GPUs & VRAM. <a href="https://github.com/OpenAccess-AI-Collective/ggml-webui">[Contribute]</a></li>
     <li>This is running on a smaller, shared GPU, so it may take a few seconds to respond.</li>
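The updated `<summary>` tells users to point their duplicated Space at their own GGUF model by editing `config.yml`. A minimal sketch of what the relevant `hub` section of that file might look like, with key names inferred from the `config["hub"]["repo_id"]` and `config["hub"]["filename"]` lookups in `tabbed.py` (the values below are hypothetical examples, and the real file may contain additional keys):

```yaml
# Hypothetical sketch of the `hub` section of config.yml.
# Key names inferred from the config["hub"]["repo_id"] /
# config["hub"]["filename"] references in tabbed.py; values are examples.
hub:
  repo_id: your-username/your-model-GGUF   # HF repo hosting the GGUF file
  filename: your-model.Q4_K_M.gguf         # quantized model file to load
```

After duplicating the Space, replacing these two values with a repo and filename of your own GGUF quantization should be enough for the template strings in `tabbed.py` to render the correct model links.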