TheBloke committed commit f13e4f8 (parent: c02fcf6)

Upload README.md

Files changed (1): README.md (+59 −0)

README.md CHANGED
@@ -68,10 +68,12 @@ Here is an incomplete list of clients and libraries that are known to support GG
 SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
 USER: {prompt}
 ASSISTANT:
+
 ```
 
 <!-- prompt-template end -->
 
+
 <!-- compatibility_gguf start -->
 ## Compatibility
 
@@ -150,6 +152,63 @@ del synthia-70b-v1.2b.Q8_0.gguf-split-a synthia-70b-v1.2b.Q8_0.gguf-split-b
 
 </details>
 <!-- README_GGUF.md-provided-files end -->
 
+<!-- README_GGUF.md-how-to-download start -->
+## How to download GGUF files
+
+**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
+
+The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
+- LM Studio
+- LoLLMS Web UI
+- Faraday.dev
+
+### In `text-generation-webui`
+
+Under Download Model, you can enter the model repo: TheBloke/Synthia-70B-v1.2b-GGUF and below it, a specific filename to download, such as: synthia-70b-v1.2b.Q4_K_M.gguf.
+
+Then click Download.
+
+### On the command line, including multiple files at once
+
+I recommend using the `huggingface-hub` Python library:
+
+```shell
+pip3 install 'huggingface-hub>=0.17.1'
+```
+
+Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+```shell
+huggingface-cli download TheBloke/Synthia-70B-v1.2b-GGUF synthia-70b-v1.2b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+```
+
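If you are scripting downloads rather than using the CLI, the `huggingface_hub` Python library exposes the same operation as `hf_hub_download(repo_id, filename)`. As a minimal stdlib-only sketch, here is how the repo id and filename from the command above map onto the Hub's standard resolve URL (this illustrates the URL layout only; `hub_resolve_url` is a hypothetical helper, not a library API):

```python
# Sketch: how a repo id and filename map to the Hub's download URL.
# For real downloads, prefer huggingface_hub.hf_hub_download(repo_id,
# filename), which adds caching, resumption and authentication.
def hub_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the standard Hugging Face Hub resolve URL for a file."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hub_resolve_url(
    "TheBloke/Synthia-70B-v1.2b-GGUF",
    "synthia-70b-v1.2b.Q4_K_M.gguf",
)
print(url)
```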
+<details>
+<summary>More advanced huggingface-cli download usage</summary>
+
+You can also download multiple files at once with a pattern:
+
+```shell
+huggingface-cli download TheBloke/Synthia-70B-v1.2b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+```
+
+For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
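The `--include` value is a shell-style glob matched against repository filenames. To preview locally which names such a pattern would select, Python's stdlib `fnmatch` applies the same wildcard rules (`fnmatchcase` is used here for deterministic case-sensitive matching; the file list below is illustrative, not the repo's actual listing):

```python
from fnmatch import fnmatchcase

# Hypothetical repo file listing (illustrative only).
files = [
    "synthia-70b-v1.2b.Q4_K_M.gguf",
    "synthia-70b-v1.2b.Q4_K_S.gguf",
    "synthia-70b-v1.2b.Q5_K_M.gguf",
    "README.md",
]

# Same shell-style wildcard semantics as --include='*Q4_K*gguf'
selected = [f for f in files if fnmatchcase(f, "*Q4_K*gguf")]
print(selected)
```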
+
+To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+```shell
+pip3 install hf_transfer
+```
+
+And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+```shell
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Synthia-70B-v1.2b-GGUF synthia-70b-v1.2b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+```
+
+Windows CLI users: use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+</details>
+<!-- README_GGUF.md-how-to-download end -->
+
 <!-- README_GGUF.md-how-to-run start -->
 ## Example `llama.cpp` command