---
base_model: [deepseek-ai/DeepSeek-Coder-V2-Instruct]
---
#### Custom quantizations of DeepSeek-Coder-V2-Instruct optimized for CPU inference.
### This iq4xm one uses GGML's IQ4_XS 4-bit type in combination with Q8_0, so it runs fast with minimal loss and takes advantage of the int8 optimizations on most newer server CPUs.
### While it required custom code to make, it is compatible with standard llama.cpp from GitHub, or just search "nisten" in LM Studio.
>[!TIP]
>The following 4-bit version is the one I use myself; it gets 17 tokens/s on 64 ARM cores.
>
>You don't need to consolidate the files anymore, just point llama-cli to the first one and it'll handle the rest fine.
>
>Then, to run in command-line interactive mode (the prompt.txt file is optional):
>```bash
>./llama-cli --temp 0.4 -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf -c 32000 -co -cnv -i -f prompt.txt
>```
>
```
deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf
deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf
```
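The shard names above follow a fixed numbering pattern, so it's easy to confirm all four parts finished downloading before launching. A minimal sketch of such a check (filenames exactly as listed above; the script itself is just an illustration, not part of llama.cpp):

```shell
# Check that all four iq4xm shards are present in the current directory.
missing=0
for i in $(seq -f "%05g" 1 4); do
  f="deepseek_coder_v2_cpu_iq4xm.gguf-${i}-of-00004.gguf"
  [ -f "$f" ] || { echo "missing: $f"; missing=1; }
done
if [ "$missing" -eq 0 ]; then
  echo "all 4 shards present, safe to point llama-cli at the first one"
else
  echo "some shards are missing; re-run the downloads"
fi
```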
>[!TIP]
>### To download the models MUCH faster, first install aria2 (Linux: `sudo apt install aria2`, macOS: `brew install aria2`):
>
```bash
sudo apt install -y aria2
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00002-of-00004.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00003-of-00004.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_iq4xm.gguf-00004-of-00004.gguf
```
>[!TIP]
> ### And to download the Q8_0 version, converted as losslessly as possible from the original HF bf16 weights, grab these:
>
```bash
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00001-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00001-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00002-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00002-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00003-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00003-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00004-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00004-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00005-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00005-of-00006.gguf
aria2c -x 8 -o deepseek_coder_v2_cpu_q8_0-00006-of-00006.gguf \
https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main/deepseek_coder_v2_cpu_q8_0-00006-of-00006.gguf
```
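Both download lists follow the same shard-naming pattern, so the repeated aria2c calls can be folded into one small helper. A minimal sketch using the same repo URL as above (the `fetch_shards` function name is my own, not part of any tool):

```shell
# fetch_shards PREFIX COUNT: download every shard of one quantization with aria2c.
fetch_shards() {
  prefix="$1"; n="$2"
  base="https://huggingface.co/nisten/deepseek-coder-v2-inst-cpu-optimized-gguf/resolve/main"
  total=$(printf "%05d" "$n")
  for i in $(seq -f "%05g" 1 "$n"); do
    f="${prefix}-${i}-of-${total}.gguf"
    aria2c -x 8 -o "$f" "$base/$f"
  done
}

# Usage (uncomment one):
# fetch_shards deepseek_coder_v2_cpu_iq4xm.gguf 4
# fetch_shards deepseek_coder_v2_cpu_q8_0 6
```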
The use of DeepSeek-Coder-V2 Base/Instruct models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL). The DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use. It's a permissive license that only restricts use for military purposes, harming minors, or patent trolling.
Enjoy and remember to accelerate!
-Nisten