Update README.md
README.md CHANGED
```diff
@@ -27,12 +27,12 @@ Unless you are able to use the latest GPTQ-for-LLaMa code, please use `medalpaca
 * `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors`
   * Created with the latest GPTQ-for-LLaMa code
   * Parameters: Groupsize = 128g. No act-order.
-  * Command: `CUDA_VISIBLE_DEVICES=0 python3 llama.py medalpaca-13b c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors
+  * Command: `CUDA_VISIBLE_DEVICES=0 python3 llama.py medalpaca-13b c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors`
 * `medalpaca-13B-GPTQ-4bit-128g.safetensors`
   * Created with the latest GPTQ-for-LLaMa code
   * Parameters: Groupsize = 128g. act-order.
   * Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
-  * Command: `CUDA_VISIBLE_DEVICES=0 python3 llama.py medalpaca-13b c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors medalpaca-13B-GPTQ-4bit-128g.safetensors
+  * Command: `CUDA_VISIBLE_DEVICES=0 python3 llama.py medalpaca-13b c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors medalpaca-13B-GPTQ-4bit-128g.safetensors`

 ## How to run in `text-generation-webui`

```
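Not part of the diff above, but since each quantisation command writes a single `.safetensors` file, a quick way to check what landed in it is to list the tensors it contains. The sketch below is a minimal, hedged example: the local file path and the `qweight`/`qzeros`/`scales`/`g_idx` tensor naming are assumptions about typical GPTQ-for-LLaMa output, not something stated in this README.

```python
# Minimal sketch (not from the original README) for sanity-checking a quantised
# checkpoint produced by the commands above. Assumes the file is in the current
# directory and that it uses the usual GPTQ-for-LLaMa tensor naming
# (qweight / qzeros / scales / g_idx); adjust the path if yours differs.
from collections import Counter

from safetensors import safe_open

PATH = "medalpaca-13B-GPTQ-4bit-128g.safetensors"  # hypothetical local path

with safe_open(PATH, framework="pt", device="cpu") as f:
    keys = list(f.keys())

    # Count tensor-name suffixes: quantised linear layers are expected to show up
    # as qweight/qzeros/scales (plus g_idx when act-order was used), while
    # embeddings and norms remain ordinary "weight" tensors.
    print(Counter(key.rsplit(".", 1)[-1] for key in keys))

    # Peek at one scales tensor: with groupsize 128, its row count should be
    # roughly in_features / 128 for that layer.
    for key in keys:
        if key.endswith(".scales"):
            print(key, tuple(f.get_tensor(key).shape))
            break
```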