Commit a91b260 (1 parent: 82a3be9)
TheBloke committed: Update README.md

Files changed (1):
  1. README.md +26 -1

README.md CHANGED
@@ -112,8 +112,33 @@ Refer to the Provided Files table below to see what files use which methods, and
  | [synthia-70b.ggmlv3.Q5_0.bin](https://huggingface.co/TheBloke/Synthia-70B-GGML/blob/main/synthia-70b.ggmlv3.Q5_0.bin) | Q5_0 | 5 | 47.46 GB| 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
  | [synthia-70b.ggmlv3.Q5_K_S.bin](https://huggingface.co/TheBloke/Synthia-70B-GGML/blob/main/synthia-70b.ggmlv3.Q5_K_S.bin) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
  | [synthia-70b.ggmlv3.Q5_K_M.bin](https://huggingface.co/TheBloke/Synthia-70B-GGML/blob/main/synthia-70b.ggmlv3.Q5_K_M.bin) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+ | synthia-70b.ggmlv3.q6_K.bin | q6_K | 6 | 56.59 GB | 59.09 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
+ | synthia-70b.ggmlv3.q8_0.bin | q8_0 | 8 | 73.23 GB | 75.73 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

- **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
+ ### q6_K and q8_0 files require expansion from archive
+
+ **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed; they are just a way of storing a .bin file in two parts.
+
+ <details>
+ <summary>Click for instructions regarding the q6_K and q8_0 files</summary>
+
+ ### q6_K
+ Please download:
+ * `synthia-70b.ggmlv3.q6_K.zip`
+ * `synthia-70b.ggmlv3.q6_K.z01`
+
+ ### q8_0
+ Please download:
+ * `synthia-70b.ggmlv3.q8_0.zip`
+ * `synthia-70b.ggmlv3.q8_0.z01`
+
+ Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip`; the basic `unzip` tool did not work. Example:
+ ```
+ sudo apt update -y && sudo apt install 7zip
+ 7zz x synthia-70b.ggmlv3.q6_K.zip
+ ```
+
+ </details>

  ## How to run in `llama.cpp`

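The split-archive step added in this commit is straightforward to script. The sketch below is a minimal, illustrative helper (not part of the README itself) that expands both archives and lists the resulting `.bin` files; it assumes the `.zip` and `.z01` parts have already been downloaded into the working directory and that 7-Zip's `7zz` binary is installed as described above. The `zip -0 -s` command shown in a comment only illustrates how an uncompressed two-part archive of this kind can be produced with Info-ZIP; it is not necessarily how the uploaded files were made.

```
#!/usr/bin/env bash
# Illustrative sketch only: expand the two split archives described above.
# Assumes synthia-70b.ggmlv3.{q6_K,q8_0}.zip and the matching .z01 parts are
# in the current directory, and that 7-Zip is installed
# (sudo apt update -y && sudo apt install 7zip).
set -euo pipefail

# For reference only: an uncompressed two-part archive like these could be
# produced with Info-ZIP's split option, e.g.
#   zip -0 -s 48g synthia-70b.ggmlv3.q6_K.zip synthia-70b.ggmlv3.q6_K.bin
# (hypothetical split size; not necessarily the method used for these files).

for quant in q6_K q8_0; do
    # Pointing 7zz at the .zip part picks up the matching .z01 automatically.
    7zz x "synthia-70b.ggmlv3.${quant}.zip"
done

# The expanded single-file models can then be used directly with llama.cpp.
ls -lh synthia-70b.ggmlv3.q6_K.bin synthia-70b.ggmlv3.q8_0.bin
```

Each expanded `.bin` needs roughly the disk space listed in the table above, so check free space before extracting.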