TheBloke committed
Commit d43cdc9
1 Parent(s): 6360485

Upload README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -170,7 +170,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/13B-Ouroboros-GGUF and below it, a specific filename to download, such as: 13b-ouroboros.q4_K_M.gguf.
+Under Download Model, you can enter the model repo: TheBloke/13B-Ouroboros-GGUF and below it, a specific filename to download, such as: 13b-ouroboros.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -185,7 +185,7 @@ pip3 install huggingface-hub>=0.17.1
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/13B-Ouroboros-GGUF 13b-ouroboros.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download TheBloke/13B-Ouroboros-GGUF 13b-ouroboros.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
@@ -208,7 +208,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/13B-Ouroboros-GGUF 13b-ouroboros.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/13B-Ouroboros-GGUF 13b-ouroboros.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
@@ -221,7 +221,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
 Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m 13b-ouroboros.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
+./main -ngl 32 -m 13b-ouroboros.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
@@ -261,7 +261,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/13B-Ouroboros-GGUF", model_file="13b-ouroboros.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/13B-Ouroboros-GGUF", model_file="13b-ouroboros.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
 
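Note: the corrected `huggingface-cli download` command in the second and third hunks has a Python equivalent via `huggingface_hub`, which the CLI wraps. A minimal sketch, assuming `huggingface-hub>=0.17.1` (and optionally `hf_transfer`) are installed as in the README excerpts above; the repo and filename are taken from the diff, everything else is standard `huggingface_hub` usage:

```python
import os

# Optional: enable the hf_transfer accelerated downloader (pip3 install hf_transfer),
# mirroring HF_HUB_ENABLE_HF_TRANSFER=1 in the shell example. huggingface_hub reads
# this variable when it is imported, so set it before the import.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

# Download the single GGUF file into the current directory, matching the
# `huggingface-cli download` command in the diff above.
path = hf_hub_download(
    repo_id="TheBloke/13B-Ouroboros-GGUF",
    filename="13b-ouroboros.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```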
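Similarly, the `ctransformers` snippet in the final hunk returns the whole completion in one call; the same loaded model can stream tokens instead. A small sketch, with `stream=True` taken from the ctransformers project README rather than from this model card:

```python
from ctransformers import AutoModelForCausalLM

# Same load call as in the diff; set gpu_layers=0 if no GPU acceleration is available.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/13B-Ouroboros-GGUF",
    model_file="13b-ouroboros.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

# Stream tokens as they are generated instead of waiting for the full completion.
for text in llm("AI is going to", stream=True):
    print(text, end="", flush=True)
```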