Tags: Transformers · GGUF · llama · uncensored
TheBloke committed
Commit d5aee61
Parent: 3e1dd31

Upload README.md

Files changed (1)
  1. README.md +5 −12
README.md CHANGED
@@ -85,15 +85,8 @@ ASSISTANT:
 ```
 
 <!-- prompt-template end -->
-<!-- licensing start -->
-## Licensing
 
-The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
 
-As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
-In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Eric Hartford's Wizardlm 13B Uncensored](https://huggingface.co/ehartford/WizardLM-13B-Uncensored).
-<!-- licensing end -->
 <!-- compatibility_gguf start -->
 ## Compatibility
 
@@ -152,7 +145,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/WizardLM-13B-Uncensored-GGUF and below it, a specific filename to download, such as: WizardLM-13B-Uncensored.q4_K_M.gguf.
+Under Download Model, you can enter the model repo: TheBloke/WizardLM-13B-Uncensored-GGUF and below it, a specific filename to download, such as: WizardLM-13B-Uncensored.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -167,7 +160,7 @@ pip3 install huggingface-hub>=0.17.1
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/WizardLM-13B-Uncensored-GGUF WizardLM-13B-Uncensored.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download TheBloke/WizardLM-13B-Uncensored-GGUF WizardLM-13B-Uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
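The same download can also be scripted with the `huggingface_hub` Python API. A minimal sketch, using the repo ID and filename from the command above (the `local_dir_use_symlinks` keyword mirrors the CLI flag):

```python
# Minimal sketch: download one GGUF file via the huggingface_hub Python API.
# Repo ID and filename are taken from the CLI command above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/WizardLM-13B-Uncensored-GGUF",
    filename="WizardLM-13B-Uncensored.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(model_path)  # local path of the downloaded .gguf file
```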
@@ -190,7 +183,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-13B-Uncensored-GGUF WizardLM-13B-Uncensored.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-13B-Uncensored-GGUF WizardLM-13B-Uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
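To get the same acceleration from a Python script, the variable can be set in-process; a small sketch, noting that `huggingface_hub` reads it when the module is imported:

```python
# Sketch: enable hf_transfer for programmatic downloads. Set the variable
# before importing huggingface_hub, which reads it at import time.
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

import huggingface_hub  # downloads made via this module now use hf_transfer
```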
@@ -203,7 +196,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
 Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m WizardLM-13B-Uncensored.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
+./main -ngl 32 -m WizardLM-13B-Uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
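The same GGUF file can also be driven from Python through the llama-cpp-python bindings (not covered in this README excerpt; shown here only as a hedged sketch, with the `./main` flags above mapped to keyword arguments):

```python
# Sketch, assuming `pip install llama-cpp-python`. Mirrors the ./main flags above.
from llama_cpp import Llama

llm = Llama(
    model_path="WizardLM-13B-Uncensored.Q4_K_M.gguf",
    n_ctx=4096,       # -c 4096
    n_gpu_layers=32,  # -ngl 32; use 0 without GPU acceleration
)
out = llm(
    "You are a helpful AI assistant.\n\nUSER: Write a haiku about llamas.\nASSISTANT:",
    max_tokens=128,
    temperature=0.7,      # --temp 0.7
    repeat_penalty=1.1,   # --repeat_penalty 1.1
)
print(out["choices"][0]["text"])
```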
@@ -243,7 +236,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-13B-Uncensored-GGUF", model_file="WizardLM-13B-Uncensored.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-13B-Uncensored-GGUF", model_file="WizardLM-13B-Uncensored.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
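ctransformers can also stream tokens as they are generated, which is often preferable for interactive use; a short usage sketch building on the `llm` object loaded above:

```python
# Usage sketch: stream tokens from the ctransformers model loaded above.
for token in llm("AI is going to", stream=True):
    print(token, end="", flush=True)
print()
```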
 