---
license: other
license_name: tongyi-qianwen
license_link: >-
  https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
quantized_by: bartowski
---
## Exllama v2 Quantizations of Qwen1.5-14B-Chat

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains a quantization at a different bits per weight; the `main` branch holds only the measurement.json, which can be reused for further conversions.

Original model: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/
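If you need a bits-per-weight target that isn't published here, the measurement.json from `main` can be reused so ExLlamaV2's converter skips the (slow) measurement pass. A minimal sketch, run from a checkout of the exllamav2 repo; the input path, working directory, and the 5.5 bpw target are all illustrative:

```shell
# re-quantize with the shared measurement (paths and bpw target are placeholders)
python convert.py -i /path/to/Qwen1.5-14B-Chat \
    -o /tmp/exl2_work \
    -cf Qwen1.5-14B-Chat-exl2-5_5 \
    -b 5.5 \
    -m measurement.json
```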
No GQA, so VRAM requirements will be higher than for comparable models that use it.

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Qwen1.5-14B-Chat-exl2/tree/8_0) | 8.0 | 8.0 | 18.5 GB | 28.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [6_5](https://huggingface.co/bartowski/Qwen1.5-14B-Chat-exl2/tree/6_5) | 6.5 | 8.0 | 15.9 GB | 26.4 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Qwen1.5-14B-Chat-exl2/tree/5_0) | 5.0 | 6.0 | 13.3 GB | 23.9 GB | Slightly lower perplexity vs 6.5. |
| [4_25](https://huggingface.co/bartowski/Qwen1.5-14B-Chat-exl2/tree/4_25) | 4.25 | 6.0 | 12.0 GB | 22.6 GB | GPTQ equivalent bits per weight, can fit in 12 GB card with lower context. |
| [3_75](https://huggingface.co/bartowski/Qwen1.5-14B-Chat-exl2/tree/3_75) | 3.75 | 6.0 | 11.1 GB | 21.7 GB | Lower quality but still generally usable. |
| [3_0](https://huggingface.co/bartowski/Qwen1.5-14B-Chat-exl2/tree/3_0) | 3.0 | 6.0 | 10.0 GB | 20.6 GB | Very low quality, not recommended unless you have to. |

VRAM requirements are listed for both 4k and 16k context since, without GQA, the difference between the two is massive (over 10 GB at every bits per weight).
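As a rough cross-check on those figures, the weights-only footprint is approximately parameter count × bits per weight ÷ 8; the rest of each table entry is dominated by the (non-GQA) KV cache plus overhead. A back-of-the-envelope sketch, assuming roughly 14.2B parameters for Qwen1.5-14B (an approximation):

```shell
# weights-only estimate: params * bpw / 8 bits-per-byte (param count is approximate)
awk 'BEGIN { printf "6.5 bpw weights: %.1f GB\n", 14.2e9 * 6.5 / 8 / 1e9 }'
# ~11.5 GB; the 6_5 row's 15.9 GB at 4k context is this plus KV cache and overhead
```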
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Qwen1.5-14B-Chat-exl2 Qwen1.5-14B-Chat-exl2-6_5
```
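Note that cloning pulls the model shards through Git LFS; if the clone finishes suspiciously fast and leaves small pointer files instead of multi-GB safetensors, git-lfs is likely not set up. A one-time fix (then re-clone):

```shell
git lfs install
```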
With `huggingface-cli` (from the `huggingface-hub` package; credit to TheBloke for the instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (useful only if you just want the measurement.json) to a folder called `Qwen1.5-14B-Chat-exl2`:
```shell
mkdir Qwen1.5-14B-Chat-exl2
huggingface-cli download bartowski/Qwen1.5-14B-Chat-exl2 --local-dir Qwen1.5-14B-Chat-exl2 --local-dir-use-symlinks False
```
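If all you actually want is the measurement file, recent versions of `huggingface-cli` can also fetch a single file by name (this assumes measurement.json sits at the root of `main`):

```shell
huggingface-cli download bartowski/Qwen1.5-14B-Chat-exl2 measurement.json --local-dir Qwen1.5-14B-Chat-exl2
```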
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Qwen1.5-14B-Chat-exl2-6_5
huggingface-cli download bartowski/Qwen1.5-14B-Chat-exl2 --revision 6_5 --local-dir Qwen1.5-14B-Chat-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't always like `_` in folder names):
```shell
mkdir Qwen1.5-14B-Chat-exl2-6.5
huggingface-cli download bartowski/Qwen1.5-14B-Chat-exl2 --revision 6_5 --local-dir Qwen1.5-14B-Chat-exl2-6.5 --local-dir-use-symlinks False
```
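After downloading a branch, a quick sanity check that the quant loads and fits in VRAM is ExLlamaV2's bundled inference script, run from a checkout of the exllamav2 repo (the model path below assumes the 6_5 download from the Linux example above):

```shell
python test_inference.py -m Qwen1.5-14B-Chat-exl2-6_5 -p "Once upon a time,"
```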
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski