nisten committed (verified)
Commit 1b861d6 · Parent(s): d238293

Update README.md

Files changed (1): README.md (+4 −2)
README.md CHANGED
@@ -8,8 +8,10 @@ base_model: [deepseek-ai/DeepSeek-Coder-V2-Instruct]
 ### While it required custom code to make, it is standard compatible with plain llama.cpp from github or just search nisten in lmstudio
 
 >[!TIP]
->The following 4bit version is the one I use myself, it gets 17tps on 64 arm cores
->You don't need to consolidates the files anymore, just point llama-cli to the first one and it'll handle the rest fine.
+>The following 4bit version is the one I use myself, it gets 17tps on 64 arm cores.
+>
+>You don't need to consolidate the files anymore, just point llama-cli to the first one and it'll handle the rest fine.
+>
 >Then to run in commandline interactive mode (prompt.txt file is optional) just do:
 >```c++
 >./llama-cli --temp 0.4 -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf -c 32000 -co -cnv -i -f prompt.txt
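For reference, a minimal end-to-end sketch of what the updated README describes: fetching the split 4-bit GGUF shards and pointing llama-cli at the first one. The download step and the `<repo-id>` placeholder are assumptions, not part of the commit; only the llama-cli invocation comes from the README itself.

```bash
# Assumption: the shards are hosted in a Hugging Face model repo; replace <repo-id>
# with the actual repo id (not specified in this commit).
huggingface-cli download <repo-id> --include "deepseek_coder_v2_cpu_iq4xm.gguf-*"

# No need to consolidate the shards: llama-cli loads the remaining split files
# automatically when given the first one (per the README's instructions).
./llama-cli --temp 0.4 \
  -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf \
  -c 32000 -co -cnv -i -f prompt.txt
```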