TheBloke committed on
Commit 1f8058b
1 Parent(s): 19e5b8a

Initial GPTQ model commit

Files changed (1):
  1. README.md +133 -37

README.md CHANGED
@@ -27,32 +27,28 @@ tags:
27
  </div>
28
  <!-- header end -->
29
 
30
- # Meta's Llama 2 70B GPTQ
31
 
32
- These files are GPTQ model files for [Meta's Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf).
33
 
34
  Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
35
 
36
  Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for these quantisations!
37
 
38
- ## Required: latest version of Transformers
39
 
40
- Before trying these GPTQs, please update Transformers to the latest Github code:
41
 
42
- ```
43
- pip3 install git+https://github.com/huggingface/transformers
44
- ```
45
 
46
- If using a UI like text-generation-webui, make sure to do this in the Python environment of text-generation-webui.
47
 
48
- ## Repositories available
49
 
50
- * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ)
51
- * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-hf)
52
 
53
- ## Required: latest version of Transformers
54
 
55
- Before trying these GPTQs, please update Transformers to the latest Github code:
56
 
57
  ```
58
  pip3 install git+https://github.com/huggingface/transformers
@@ -60,6 +56,12 @@ pip3 install git+https://github.com/huggingface/transformers
60
 
61
  If using a UI like text-generation-webui, make sure to do this in the Python environment of text-generation-webui.
62
 
63
  ## Prompt template: None
64
 
65
  ```
@@ -74,14 +76,14 @@ Each separate quant is in a different branch. See below for instructions on fet
74
 
75
  | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
76
  | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
77
- | main | 4 | None | True | 35.33 GB | False | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
78
- | gptq-4bit-32g-actorder_True | 4 | 32 | True | Still processing | False | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
79
  | gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
80
  | gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
81
- | gptq-3bit--1g-actorder_True | 3 | None | True | Still processing | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
82
- | gptq-3bit-128g-actorder_False | 3 | 128 | False | Still processing | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
83
- | gptq-3bit-128g-actorder_True | 3 | 128 | True | Still processing | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
84
- | gptq-3bit-64g-actorder_True | 3 | 64 | True | Still processing | False | AutoGPTQ | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. Poor AutoGPTQ CUDA speed. |
85
 
86
  ## How to download from branches
87
 
@@ -92,30 +94,44 @@ git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/L
92
  ```
93
  - In Python Transformers code, the branch is the `revision` parameter; see below.
94
 
95
- ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
96
 
97
- Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
98
 
99
  It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
100
 
101
- Note: ExLlama is not currently compatible with Llama 2 70B. Please try GPTQ-for-LLaMa, or AutoGPTQ.
102
 
103
- Remember to update Transformers to latest Github version before trying to use this model:
104
 
105
  ```
106
- pip3 install git+https://github.com/huggingface/transformers
 
107
  ```
108
 
109
  1. Click the **Model tab**.
110
  2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-70B-GPTQ`.
111
- - To download from a specific branch, enter for example `TheBloke/Llama-2-70B-GPTQ:gptq-4bit-128g-actorder_True`
112
  - see Provided Files above for the list of branches for each option.
113
  3. Click **Download**.
114
  4. The model will start downloading. Once it's finished it will say "Done"
115
- 5. Set Loader to AutoGPTQ or GPTQ-for-LLaMA
116
  - If you use AutoGPTQ, make sure "No inject fused attention" is ticked
117
  6. In the top left, click the refresh icon next to **Model**.
118
- 7. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-70B-chat-GPTQ`
119
  8. The model will automatically load, and is now ready for use!
120
  9. Then click **Save settings for this model** followed by **Reload the Model** in the top right to make sure your settings are persisted.
121
  10. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
@@ -124,14 +140,17 @@ pip3 install git+https://github.com/huggingface/transformers
124
 
125
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
126
 
127
- `GITHUB_ACTIONS=true pip install auto-gptq`
128
 
129
- And update Transformers to the latest version:
130
  ```
131
  pip3 install git+https://github.com/huggingface/transformers
132
  ```
133
 
134
- **Note**: you must set `inject_fused_attention=False` for Llama 2 70B models; see below.
135
 
136
  Then try the following example code:
137
 
@@ -140,17 +159,17 @@ from transformers import AutoTokenizer, pipeline, logging
140
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
141
 
142
  model_name_or_path = "TheBloke/Llama-2-70B-GPTQ"
143
- model_basename = "gptq_model-4bit--1g"
144
 
145
  use_triton = False
146
 
147
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
148
 
149
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
150
- model_basename=model_basename
151
- inject_fused_attention=False,
152
  use_safetensors=True,
153
- trust_remote_code=True,
154
  device="cuda:0",
155
  use_triton=use_triton,
156
  quantize_config=None)
@@ -160,10 +179,10 @@ To download from a specific branch, use the revision parameter, as in this examp
160
 
161
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
162
  revision="gptq-4bit-32g-actorder_True",
163
- inject_fused_attention=False,
164
  model_basename=model_basename,
 
165
  use_safetensors=True,
166
- trust_remote_code=True,
167
  device="cuda:0",
168
  quantize_config=None)
169
  """
@@ -201,7 +220,84 @@ print(pipe(prompt_template)[0]['generated_text'])
201
 
202
  The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
203
 
204
- ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
205
 
206
  <!-- footer start -->
207
  ## Discord
 
27
  </div>
28
  <!-- header end -->
29
 
30
+ # Meta's Llama 2 70B GPTQ
31
 
32
+ These files are GPTQ model files for [Meta's Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf).
33
 
34
  Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
35
 
36
  Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for these quantisations!
37
 
38
+ ## ExLlama support for 70B is here!
39
 
40
+ As of [this commit](https://github.com/turboderp/exllama/commit/b3aea521859b83cfd889c4c00c05a323313b7fee), ExLlama has support for Llama 2 70B models.
41
 
42
+ Please make sure you update ExLlama to the latest version. If you are a text-generation-webui one-click user, you must first uninstall the ExLlama wheel, then update ExLlama in `text-generation-webui/repositories`; full instructions are below.
43
 
44
+ Now that ExLlama supports these models, it is the recommended loader to use: performance should be better than with AutoGPTQ and GPTQ-for-LLaMa, and you will be able to use the higher-accuracy options, e.g. 128g + Act Order.
45
 
46
+ Reminder: ExLlama does not support 3-bit models, so if you wish to try those quants, you will need to use AutoGPTQ or GPTQ-for-LLaMa.
47
 
48
 
49
+ ## AutoGPTQ and GPTQ-for-LLaMa require the latest version of Transformers
50
 
51
+ If you plan to use any of these quants with AutoGPTQ or GPTQ-for-LLaMa, you will need to update Transformers to the latest Github code:
52
 
53
  ```
54
  pip3 install git+https://github.com/huggingface/transformers
```
56
 
57
  If using a UI like text-generation-webui, make sure to do this in the Python environment of text-generation-webui.
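If you want to confirm that the environment in use actually picked up the Git build, a quick check along these lines can help. This is only an illustrative sketch: the key point is that Llama 2 70B uses grouped-query attention, which recent Transformers builds expose via the `num_key_value_heads` config field.

```python
# Illustrative sanity check that the active Python environment has a recent
# enough Transformers build for Llama 2 70B (grouped-query attention support).
import transformers
from transformers import LlamaConfig

print("Transformers version:", transformers.__version__)

# Llama 2 70B relies on grouped-query attention, configured via num_key_value_heads.
# Older releases do not know this field, so its absence means the install is too old.
if hasattr(LlamaConfig(), "num_key_value_heads"):
    print("LlamaConfig supports num_key_value_heads - Llama 2 70B should load.")
else:
    print("This Transformers build is too old; install from the Github main branch as above.")
```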
58
 
59
+
60
+ ## Repositories available
61
+
62
+ * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ)
63
+ * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Llama-2-70B-fp16)
64
+
65
  ## Prompt template: None
66
 
67
  ```
 
76
 
77
  | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
78
  | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
79
+ | main | 4 | 128 | False | 35.33 GB | False | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
80
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 40.66 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
81
  | gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
82
  | gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
83
+ | gptq-3bit--1g-actorder_True | 3 | None | True | 26.78 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
84
+ | gptq-3bit-128g-actorder_False | 3 | 128 | False | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
85
+ | gptq-3bit-128g-actorder_True | 3 | 128 | True | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
86
+ | gptq-3bit-64g-actorder_True | 3 | 64 | True | 29.30 GB | False | AutoGPTQ | 3-bit, with group size 64g and act-order. Highest quality 3-bit option. Poor AutoGPTQ CUDA speed. |
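As a rough sanity check on the sizes above, a GPTQ file scales with bits × parameter count, plus some overhead for group-wise scales, zero-points, embeddings and norms. The sketch below is only a back-of-the-envelope estimate, not how the files were measured:

```python
# Back-of-the-envelope estimate of GPTQ file sizes for a 70B parameter model.
# Real files also store per-group scales/zeros, embeddings and norms, so the
# sizes in the table above are somewhat larger than this lower bound.
PARAMS = 70e9  # approximate parameter count of Llama 2 70B

def rough_size_gb(bits: int) -> float:
    """Quantised weight payload in decimal gigabytes, ignoring metadata."""
    return PARAMS * bits / 8 / 1e9

for bits in (4, 3):
    print(f"{bits}-bit: ~{rough_size_gb(bits):.1f} GB before group metadata")
# 4-bit: ~35.0 GB -> consistent with the 35-41 GB files above
# 3-bit: ~26.2 GB -> consistent with the 27-30 GB files above
```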
87
 
88
  ## How to download from branches
89
 
 
94
  ```
95
  - In Python Transformers code, the branch is the `revision` parameter; see below.
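If you prefer to fetch a branch from Python rather than git, the `huggingface_hub` library can do the same job. A minimal sketch, where the branch name and target directory are examples only:

```python
# Minimal sketch: download one quantisation branch with huggingface_hub instead
# of git. The branch name and local_dir below are examples only.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/Llama-2-70B-GPTQ",
    revision="gptq-4bit-128g-actorder_True",  # any branch from the table above
    local_dir="Llama-2-70B-GPTQ-4bit-128g",   # where to put the files
)
print("Model files downloaded to:", local_path)
```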
96
 
97
+ ### How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
98
 
99
+ Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui), which includes support for Llama 2 models.
100
 
101
  It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
102
 
103
+ ### Use ExLlama (4-bit models only) - recommended option if you have enough VRAM for 4-bit
104
+
105
+ ExLlama has now been updated to support Llama 2 70B, but you will need to update ExLlama to the latest version.
106
+
107
+ By default text-generation-webui installs a pre-compiled wheel for ExLlama. Until text-generation-webui updates to reflect the ExLlama changes - which hopefully won't be long - you must uninstall that and then clone ExLlama into the `text-generation-webui/repositories` directory. ExLlama will then compile its kernel on model load.
108
 
109
+ Note that this requires that your system is capable of compiling CUDA extensions, which may be an issue on Windows.
110
 
111
+ Instructions for Linux One Click Installer:
112
+
113
+ 1. Change directory into the text-generation-webui main folder: `cd /path/to/text-generation-webui`
114
+ 2. Activate the conda env of text-generation-webui:
115
  ```
116
+ source "installer_files/conda/etc/profile.d/conda.sh"
117
+ conda activate installer_files/env
118
  ```
119
+ 3. Run: `pip3 uninstall exllama`
120
+ 4. Run: `cd repositories/exllama` followed by `git pull` to update exllama.
121
+ 5. Now launch text-generation-webui and follow the instructions below for downloading and running the model. ExLlama should build its kernel when the model first loads; a quick way to confirm the old wheel was removed beforehand is sketched below.
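To confirm the pre-compiled wheel really is gone before the `repositories/exllama` copy takes over, a quick check from the same conda environment might look like this (assuming the wheel's package name is `exllama`, as in the uninstall step above):

```python
# Quick check, run inside the text-generation-webui conda env, that the old
# pre-compiled exllama wheel has been removed so repositories/exllama is used.
import importlib.util

if importlib.util.find_spec("exllama") is None:
    print("No exllama wheel installed - the repositories/exllama copy will be used.")
else:
    print("An exllama package is still installed; run `pip3 uninstall exllama` again.")
```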
122
+
123
+ ### Downloading and running the model in text-generation-webui
124
 
125
  1. Click the **Model tab**.
126
  2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-70B-GPTQ`.
127
+ - To download from a specific branch, enter for example `TheBloke/Llama-2-70B-GPTQ:gptq-4bit-32g-actorder_True`
128
  - see Provided Files above for the list of branches for each option.
129
  3. Click **Download**.
130
  4. The model will start downloading. Once it's finished it will say "Done"
131
+ 5. Set Loader to ExLlama if you plan to use a 4-bit file, or else choose AutoGPTQ or GPTQ-for-LLaMA.
132
  - If you use AutoGPTQ, make sure "No inject fused attention" is ticked
133
  6. In the top left, click the refresh icon next to **Model**.
134
+ 7. In the **Model** dropdown, choose the model you just downloaded: `TheBloke/Llama-2-70B-GPTQ`
135
  8. The model will automatically load, and is now ready for use!
136
  9. Then click **Save settings for this model** followed by **Reload the Model** in the top right to make sure your settings are persisted.
137
  10. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
 
140
 
141
  First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
142
 
143
+ ```
144
+ GITHUB_ACTIONS=true pip3 install auto-gptq
145
+ ```
146
+
147
+ You also need the latest Transformers code from Github:
148
 
 
149
  ```
150
  pip3 install git+https://github.com/huggingface/transformers
151
  ```
152
 
153
+ You must set `inject_fused_attention=False` as shown below.
154
 
155
  Then try the following example code:
156
 
 
159
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
160
 
161
  model_name_or_path = "TheBloke/Llama-2-70B-GPTQ"
162
+ model_basename = "gptq_model-4bit-128g"
163
 
164
  use_triton = False
165
 
166
  tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
167
 
168
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
169
+ model_basename=model_basename,
170
+ inject_fused_attention=False, # Required for Llama 2 70B model at this time.
171
  use_safetensors=True,
172
+ trust_remote_code=False,
173
  device="cuda:0",
174
  use_triton=use_triton,
175
  quantize_config=None)
 
179
 
180
  model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
181
  revision="gptq-4bit-32g-actorder_True",
 
182
  model_basename=model_basename,
183
+ inject_fused_attention=False, # Required for Llama 2 70B model at this time.
184
  use_safetensors=True,
185
+ trust_remote_code=False,
186
  device="cuda:0",
187
  quantize_config=None)
188
  """
 
220
 
221
  The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
222
 
223
+ ExLlama is now compatible with Llama 2 70B models, as of [this commit](https://github.com/turboderp/exllama/commit/b3aea521859b83cfd889c4c00c05a323313b7fee).
224
+
225
+ Please see the Provided Files table above for per-file compatibility.
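If you want to check a branch's parameters programmatically before picking a loader (for example, ExLlama cannot load the 3-bit quants), the `quantize_config.json` that AutoGPTQ writes alongside each quant records the bits, group size and act-order setting. A minimal sketch, with the branch name as an example:

```python
# Minimal sketch: inspect a branch's quantize_config.json to see its GPTQ
# parameters before choosing a loader. The branch name is an example only.
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-70B-GPTQ",
    filename="quantize_config.json",
    revision="gptq-3bit-128g-actorder_True",
)
with open(config_path) as f:
    qcfg = json.load(f)

print(qcfg)  # e.g. {"bits": 3, "group_size": 128, "desc_act": true, ...}
if qcfg.get("bits") == 3:
    print("3-bit quant: use AutoGPTQ or GPTQ-for-LLaMa, not ExLlama.")
```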
226
+
227
+ <!-- footer start -->
228
+ ## Discord
229
+
230
+ For further support, and discussions on these models and AI in general, join us at:
231
+
232
+ [TheBloke AI's Discord server](https://discord.gg/theblokeai)
233
+
234
+ ## Thanks, and how to contribute.
235
+
236
+ Thanks to the [chirper.ai](https://chirper.ai) team!
237
+
238
+ I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
239
+
240
+ If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
241
+
242
+ Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
243
+
244
+ * Patreon: https://patreon.com/TheBlokeAI
245
+ * Ko-Fi: https://ko-fi.com/TheBlokeAI
246
+
247
+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
248
+
249
+ **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
250
+
251
+ Thank you to all my generous patrons and donaters!
252
+
253
+ <!-- footer end -->
254
+
255
+ # Original model card: Meta's Llama 2 70B
256
+
257
+
258
+ <!-- header start -->
259
+ <div style="width: 100%;">
260
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
261
+ </div>
262
+ <div style="display: flex; justify-content: space-between; width: 100%;">
263
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
264
+ <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
265
+ </div>
266
+ <div style="display: flex; flex-direction: column; align-items: flex-end;">
267
+ <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
268
+ </div>
269
+ </div>
270
+ <!-- header end -->
271
+
272
+ # Meta's Llama 2 70B fp16
273
+
274
+ These files are fp16 format model files for [Meta's Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf).
275
+
276
+ They were produced by downloading the PTH files from Meta, and then converting to HF format using the latest Transformers 4.32.0.dev0, from Git, with the Llama 2 PR included: https://github.com/huggingface/transformers/pull/24891.
277
+
278
+ Command to convert was:
279
+ ```
280
+ python3 /workspace/venv/pytorch2/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /workspace/git/llama/download --model_size 70B --output_dir /workspace/process/llama-2-70b-chat/source --safe_serialization true
281
+ ```
282
+
283
+ The files were saved in Safetensors format.
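As a quick way to confirm a conversion like this loads cleanly, something along these lines can be used. This is illustrative only: the local path is an example, and loading the full 70B model in fp16 needs roughly 140 GB of memory spread across your GPUs and/or CPU RAM (via `accelerate`).

```python
# Illustrative check that a converted HF/Safetensors Llama 2 70B model loads.
# The path is an example; fp16 70B needs ~140 GB of combined GPU/CPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/workspace/process/llama-2-70b/source"  # example path to the converted model

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,   # keep the weights in fp16 as converted
    device_map="auto",           # spread layers across available devices (needs accelerate)
    low_cpu_mem_usage=True,
)
print(model.config.num_key_value_heads)  # 8 for Llama 2 70B (grouped-query attention)
```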
284
+
285
+ I am uploading this repo because I initially tried to create GPTQs using the [Meta Llama 2 70B HF repo](https://huggingface.co/meta-llama/Llama-2-70b-hf), but got strange errors that suggested the weights were not correct. Converting from the PTH files using the latest `convert_llama_weights_to_hf.py` script, however, worked fine.
286
+
287
+
288
+ Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware for merging and uploading these files!
289
+
290
+ ## Repositories available
291
+
292
+ * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-GPTQ)
293
+ * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-hf)
294
+ * [My fp16 conversion of the unquantised PTH model files](https://huggingface.co/TheBloke/Llama-2-70B-fp16)
295
+
296
+ ## Prompt template: None
297
+
298
+ ```
299
+ {prompt}
300
+ ```
301
 
302
  <!-- footer start -->
303
  ## Discord