---
base_model: [deepseek-ai/DeepSeek-Coder-V2-Instruct]
---
#### Custom quantizations of deepseek-coder-v2-instruct optimized for CPU inference.

### This one uses GGML TYPE IQ4_XS in combination with q8_0, so it runs fast with minimal loss and takes advantage of int8 optimizations on most newer server CPUs.

### While it required custom code to make, it is standard compatible with plain llama.cpp.
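
For reference, a similar IQ4_XS + q8_0 mix can be approximated with stock llama.cpp's `llama-quantize`. The exact per-tensor recipe and custom code behind these files aren't published, so the input filename and flag choices below are assumptions, not the actual build steps:

```bash
# Sketch only: IQ4_XS as the base type, with the output and token-embedding
# tensors kept at q8_0 so int8-friendly CPUs hit the fast paths.
# "DeepSeek-Coder-V2-Instruct-f16.gguf" is a placeholder for your own f16 conversion.
./llama-quantize \
    --output-tensor-type q8_0 \
    --token-embedding-type q8_0 \
    DeepSeek-Coder-V2-Instruct-f16.gguf \
    deepseek_coder_v2_cpu_iq4xm.gguf \
    IQ4_XS
```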

>[!TIP]
>The following 4-bit version is the one I use myself; it gets 17 tps on 64 ARM cores.
>You don't need to consolidate the files anymore, just point llama-cli to the first one and it'll handle the rest fine (see the optional merge sketch below if you do want one file).
>Then to run, just do:
>```bash
>./llama-cli --temp 0.4 -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf -c 32000 -co -cnv -i -f bigprompt.txt
>```
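
If you still want a single file (for tooling that can't follow split GGUFs), llama.cpp's gguf-split tool can merge the shards; the output filename below is just an example:

```bash
# Optional: merge the four shards back into one GGUF. llama-cli itself does not need this.
./llama-gguf-split --merge \
    deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf \
    deepseek_coder_v2_cpu_iq4xm_merged.gguf
```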

```verilog
deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf