asedmammad committed
Commit: 0e233b3
Parent(s): ab04198
Update README.md

README.md CHANGED
@@ -23,11 +23,11 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp)
 I use the following command line; adjust for your tastes and needs:
 
 ```
-./main -t 8 -ngl
+./main -t 8 -ngl 26 -m EverythingLM-3B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt goes here"
 ```
 Change `-t 8` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
 
-Change `-ngl
+Change `-ngl 26` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
 If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`; you can also use `--interactive-first` to start in interactive mode.
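To pick the `-t` value, you want the physical core count, not the thread count. A minimal sketch for checking it, assuming a Linux machine with util-linux available (`nproc` and `lscpu` are standard there):

```
# nproc reports logical CPUs (threads); with hyper-threading that is
# roughly 2x the physical core count, so halve it for -t.
nproc

# Count physical cores directly: unique (core, socket) pairs from lscpu.
lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l
```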
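For a machine without GPU acceleration, the README says to drop `-ngl` entirely. A sketch of that CPU-only variant, reusing the exact model file and sampling flags from the command above:

```
# CPU-only: the same invocation with -ngl removed (no layers offloaded)
./main -t 8 -m EverythingLM-3B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt goes here"
```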
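For chat-style use, the README swaps the one-shot `-p <PROMPT>` for `-i -ins` (interactive plus instruct mode). A sketch of both variants it mentions, with the surrounding flags carried over unchanged:

```
# Chat-style: interactive instruct mode instead of a one-shot prompt
./main -t 8 -ngl 26 -m EverythingLM-3B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins

# Or use --interactive-first to drop into interactive mode immediately
./main -t 8 -ngl 26 -m EverythingLM-3B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 --interactive-first
```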