Nekochu committed · Commit e527b61 (verified) · Parent(s): 6d09bbe

Update tabbed.py

Files changed (1): tabbed.py (+1 −1)
tabbed.py CHANGED
@@ -122,7 +122,7 @@ with gr.Blocks() as demo:
     ### This is the [{config["hub"]["repo_id"]}](https://huggingface.co/{config["hub"]["repo_id"]}) quantized model file [{config["hub"]["filename"]}](https://huggingface.co/{config["hub"]["repo_id"]}/blob/main/{config["hub"]["filename"]})
 
     <details>
-    <summary><a href="https://huggingface.co/spaces/Nekochu/Llama-2-13B-novel17-french-GGUF?duplicate=true">Duplicate the Space</a> to skip the queue and run in a private space or to use your own GGUF models, simply update the <a href="https://huggingface.co/spaces/Nekochu/Llama-2-13B-novel17-french-GGUF/blob/main/config.yml">config.yml</a></summary>
+    <summary><a href="https://huggingface.co/spaces/Nekochu/Luminia-13B-v3-GGUF?duplicate=true">Duplicate the Space</a> to skip the queue and run in a private space or to use your own GGUF models, simply update the <a href="https://huggingface.co/spaces/Nekochu/Luminia-13B-v3-GGUF/blob/main/config.yml">config.yml</a></summary>
     <ul>
     <li>This Space uses GGUF with GPU support, so it can quickly run larger models on smaller GPUs & VRAM. <a href="https://github.com/OpenAccess-AI-Collective/ggml-webui">[Contribute]</a></li>
     <li>This is running on a smaller, shared GPU, so it may take a few seconds to respond.</li>
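The unchanged context lines interpolate values from the Space's `config.yml` (the keys `config["hub"]["repo_id"]` and `config["hub"]["filename"]` appear in the diff). A minimal sketch of that interpolation, using a plain dict as a stand-in for the parsed config and a hypothetical filename, since the real `config.yml` contents are not shown here:

```python
# Illustrative stand-in for the parsed config.yml; the keys mirror those
# referenced in the diff's context lines.
config = {
    "hub": {
        "repo_id": "Nekochu/Luminia-13B-v3-GGUF",
        "filename": "model.Q4_K_M.gguf",  # hypothetical filename for illustration
    }
}

repo = config["hub"]["repo_id"]
fname = config["hub"]["filename"]

# Reconstruct the Markdown header line shown in the diff's context.
header = (
    f"### This is the [{repo}](https://huggingface.co/{repo}) "
    f"quantized model file [{fname}](https://huggingface.co/{repo}/blob/main/{fname})"
)
print(header)
```

Duplicating the Space and editing only these `config.yml` keys is what lets the same `tabbed.py` serve a different GGUF model, which is why this commit only needed to update the hard-coded Space URLs in the `<summary>` text.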