Update README.md
README.md CHANGED
@@ -9,10 +9,10 @@ base_model: [deepseek-ai/DeepSeek-Coder-V2-Instruct]
 
 >[!TIP]
 >The following 4bit version is the one I use myself, it gets 17tps on 64 arm cores
->You don't need to consolidates the files anymore, just point llama-cli to the first one and it'll handle the rest fine
->Then to run just do
+>You don't need to consolidate the files anymore, just point llama-cli to the first one and it'll handle the rest fine.
+>Then to run in command-line interactive mode (the prompt.txt file is optional) just do:
 >```c++
->./llama-cli --temp 0.4 -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf -c 32000 -co -cnv -i -f
+>./llama-cli --temp 0.4 -m deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf -c 32000 -co -cnv -i -f prompt.txt
 >```
 >
 
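As an aside to the change above (not part of the original commit): "point llama-cli to the first one" works because split-GGUF shards follow a zero-padded `-%05d-of-%05d.gguf` naming scheme, as seen in the README's own filename. A minimal sketch of that scheme, using a hypothetical `list_shards` helper with the prefix and shard count taken from the example:

```shell
# Hypothetical helper: print the expected shard filenames for a split GGUF.
# The prefix and shard count below are taken from the README's example model.
list_shards() {
  prefix="$1"; total="$2"
  i=1
  while [ "$i" -le "$total" ]; do
    # Shards are numbered 00001..total, zero-padded to five digits.
    printf '%s-%05d-of-%05d.gguf\n' "$prefix" "$i" "$total"
    i=$((i + 1))
  done
}

list_shards deepseek_coder_v2_cpu_iq4xm.gguf 4
# First line printed: deepseek_coder_v2_cpu_iq4xm.gguf-00001-of-00004.gguf
```

Passing only the `-00001-of-00004` file to `llama-cli -m` is enough; the loader locates the remaining shards from this naming pattern.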