asedmammad committed
Commit f279abd · 1 Parent(s): 64d2e5f

Update README.md

Files changed (1):
  1. README.md +4 -3
README.md CHANGED
````diff
@@ -6,10 +6,10 @@ tags:
 - text-generation-inference
 ---
 
-# acrastt's EverythingLM 3B GGML
+# acrastt's Marx 3B GGML
 
 
-These files are GGML format model files for [acrastt's EverythingLM 3B GGML](https://huggingface.co/acrastt/EverythingLM-3B).
+These files are GGML format model files for [acrastt's Marx 3B GGML](https://huggingface.co/acrastt/Marx-3B).
 
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
@@ -22,8 +22,9 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 
 I use the following command line; adjust for your tastes and needs:
 
+
 ```
-./main -t 8 -ngl 26 -m EverythingLM-3B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt goes here"
+./main -t 8 -ngl 26 -m Marx-3B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt goes here"
 ```
 Change `-t 8` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
 
````
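
As context for the command this diff touches, here is a minimal sketch of running the renamed model file with the `-t` value derived from the physical core count on Linux instead of hardcoded. It assumes `lscpu` is available and that the llama.cpp `main` binary and `Marx-3B.ggmlv3.q4_0.bin` sit in the current directory; all other flag values are copied verbatim from the README.

```sh
# Physical cores = unique (core, socket) pairs reported by lscpu;
# this deliberately excludes hyperthreads, matching the README's advice.
PHYS_CORES=$(lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l)

# Same invocation as the README, with the computed thread count substituted in.
./main -t "$PHYS_CORES" -ngl 26 -m Marx-3B.ggmlv3.q4_0.bin --color -c 2048 \
  --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt goes here"
```

Note that `-ngl 26` offloads 26 layers to the GPU and only takes effect when llama.cpp is built with GPU support; on a CPU-only build it can simply be dropped.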