Add k-quant GGMLs, which are now working thanks to LostRuins' PR
README.md CHANGED
```diff
@@ -77,14 +77,19 @@ Refer to the Provided Files table below to see what files use which methods, and
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | wizardlm-13b-v1.1.ggmlv3.q2_K.bin | q2_K | 2 | 5.67 GB| 8.17 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
-| wizardlm-13b-v1.1.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.80 GB| 8.30 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
-| wizardlm-13b-v1.1.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.46 GB| 8.96 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
 | wizardlm-13b-v1.1.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 7.07 GB| 9.57 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
-| wizardlm-13b-v1.1.ggmlv3.
+| wizardlm-13b-v1.1.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.46 GB| 8.96 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| wizardlm-13b-v1.1.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.80 GB| 8.30 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+| wizardlm-13b-v1.1.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
+| wizardlm-13b-v1.1.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
 | wizardlm-13b-v1.1.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.99 GB| 10.49 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
-| wizardlm-13b-v1.1.ggmlv3.
+| wizardlm-13b-v1.1.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.49 GB| 9.99 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+| wizardlm-13b-v1.1.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+| wizardlm-13b-v1.1.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
 | wizardlm-13b-v1.1.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.33 GB| 11.83 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+| wizardlm-13b-v1.1.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 9.07 GB| 11.57 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
 | wizardlm-13b-v1.1.ggmlv3.q6_K.bin | q6_K | 6 | 10.76 GB| 13.26 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
+| wizardlm-13b-v1.1.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
```
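As a rough illustration of the GPU offloading mentioned in the note above, here is a minimal sketch using the `llama-cpp-python` bindings; it assumes a build of that package with GPU support and a version old enough to still load GGMLv3 files, and the file name, layer count, and prompt format are placeholders rather than values taken from this repo.

```python
# Minimal sketch: load one of the GGML files from the table and offload some layers to the GPU.
# Assumes llama-cpp-python was built with GPU support and still supports GGMLv3 files.
from llama_cpp import Llama

llm = Llama(
    model_path="wizardlm-13b-v1.1.ggmlv3.q4_K_M.bin",  # any file from the table above
    n_ctx=2048,        # context window
    n_gpu_layers=32,   # layers offloaded to VRAM; 0 keeps the whole model in system RAM
)

# Prompt format shown here is only illustrative.
output = llm(
    "USER: Summarise what k-quant GGML files are in one paragraph.\nASSISTANT:",
    max_tokens=200,
)
print(output["choices"][0]["text"])
```

Raising `n_gpu_layers` shifts memory use away from the "Max RAM required" column and onto the GPU's VRAM instead, which is the trade-off the note describes.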