Update README.md
README.md CHANGED
@@ -72,15 +72,15 @@ Refer to the Provided Files table below to see what files use which methods, and how.
 ## Provided files
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-| longchat-7b-16k.ggmlv3.q2_K.bin | q2_K | 2 |
-| longchat-7b-16k.ggmlv3.q3_K_L.bin | q3_K_L | 3 |
-| longchat-7b-16k.ggmlv3.q3_K_M.bin | q3_K_M | 3 |
-| longchat-7b-16k.ggmlv3.q3_K_S.bin | q3_K_S | 3 |
-| longchat-7b-16k.ggmlv3.q4_K_M.bin | q4_K_M | 4 |
-| longchat-7b-16k.ggmlv3.q4_K_S.bin | q4_K_S | 4 |
-| longchat-7b-16k.ggmlv3.q5_K_M.bin | q5_K_M | 5 |
-| longchat-7b-16k.ggmlv3.q5_K_S.bin | q5_K_S | 5 |
-| longchat-7b-16k.ggmlv3.q6_K.bin | q6_K | 6 |
+| longchat-7b-16k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+| longchat-7b-16k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| longchat-7b-16k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| longchat-7b-16k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+| longchat-7b-16k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+| longchat-7b-16k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+| longchat-7b-16k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+| longchat-7b-16k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+| longchat-7b-16k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |

 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

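For reference, GPU offloading in llama.cpp is controlled by the `-ngl` / `--n-gpu-layers` flag. Below is a minimal sketch of running one of these files with part of the model offloaded; the chosen file, layer count, context size, and prompt are placeholders, and a GPU-enabled build of llama.cpp (e.g. compiled with cuBLAS or Metal support) is assumed.

```bash
# Offload 32 layers to the GPU; the remaining layers stay in system RAM,
# so actual RAM usage will be lower than the "Max RAM required" column above.
./main -m longchat-7b-16k.ggmlv3.q4_K_M.bin \
  -ngl 32 \
  -c 2048 \
  -p "USER: Summarise the following document.\nASSISTANT:"
```

Lowering `-ngl` trades VRAM for system RAM; setting it to 0 reproduces the CPU-only figures in the table.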