TheBloke committed
Commit a548e56 · 1 Parent(s): 1c82f5d

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -61,10 +61,11 @@ cd text-generation-webui
 python server.py --model gpt4-alpaca-lora-30B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
 ```
 
-The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
+The above commands assume you have installed all dependencies for `GPTQ-for-LLaMa` and `text-generation-webui`. Please see their respective repositories for further information.
 
 If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead try the CUDA branch:
 ```
+pip uninstall -y quant_cuda
 git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
 cd GPTQ-for-LLaMa
 python setup_cuda.py install --force
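
The `pip uninstall -y quant_cuda` step this commit adds guards against a leftover copy of the extension (e.g. from a previous Triton-branch install) shadowing the freshly built one. A minimal sketch of that shadowing, using a plain stand-in module — the real `quant_cuda` is a compiled CUDA extension, and the directories here are hypothetical temp dirs, not real install locations:

```python
# Python imports whichever copy of a module appears earliest on sys.path,
# so a stale install left behind can shadow a rebuilt one.
import sys
import tempfile
import pathlib

old = pathlib.Path(tempfile.mkdtemp())   # stand-in for the stale install
new = pathlib.Path(tempfile.mkdtemp())   # stand-in for the fresh CUDA build
(old / "quant_cuda.py").write_text("VERSION = 'stale-build'\n")
(new / "quant_cuda.py").write_text("VERSION = 'fresh-cuda-build'\n")

# The stale directory happens to come first on the search path.
sys.path[:0] = [str(old), str(new)]

import quant_cuda
print(quant_cuda.VERSION)  # -> 'stale-build', not the fresh build
```

Uninstalling first removes the old copy from the search path, so the subsequent `setup_cuda.py install --force` build is the one Python actually loads.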