# New GGMLv3 format for the breaking llama.cpp change of May 19th (commit 2d5db48)
This repo is the result of quantising to 4bit and 5bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

* [4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML).
* [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-HF).

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
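
If you need to update, a minimal build sketch (assuming a plain `make` build on Linux or macOS; adjust for your platform and toolchain) looks like this:

```
# Fetch llama.cpp and check out the GGMLv3 change, or anything later
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # May 19th 2023; any later commit also works
make                   # builds ./main, ./quantize, etc.
```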
For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
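
Hugging Face model repos are also git repos, so one way to fetch those older files is to clone that branch directly (a sketch only; the large model files require `git-lfs` to be installed):

```
# Clone only the GGMLv2-compatible branch of this repo
git lfs install
git clone -b previous_llama_ggmlv2 https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
```
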
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin` | q4_0 | 4bit | 4.21GB | 7.0GB | 4-bit. Smallest file and lowest RAM use, at the lowest accuracy of the provided files. |
| `Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin` | q4_1 | 4bit | 4.63GB | 7.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_0.bin` | q5_0 | 5bit | 4.63GB | 7.5GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_1.bin` | q5_1 | 5bit | 5.06GB | 7.5GB | 5-bit. Even higher accuracy, with higher resource usage and slower inference. |
| `Wizard-Vicuna-7B-Uncensored.ggmlv3.q8_0.bin` | q8_0 | 8bit | 7.58GB | 9.0GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
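
To fetch a single file rather than the whole repo, a direct download along these lines should work; the `resolve/main` URL pattern is Hugging Face's standard download path, but verify the exact filename against the repo's file list:

```
# Download one GGML file (q5_0 shown as an example)
wget https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML/resolve/main/Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_0.bin
```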
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 8 -m Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```
Change `-t 8` to the number of physical CPU cores you have.
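
If you're unsure how many physical cores you have (as opposed to hyperthreaded logical cores), commands like these can report it; exact output varies by OS, so treat this as a sketch:

```
# Linux: physical cores = "Core(s) per socket" x "Socket(s)"
lscpu | grep -E 'Core\(s\) per socket|Socket\(s\)'

# macOS
sysctl -n hw.physicalcpu
```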

## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
# Original model card

This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.