Commit 7759ced by TheBloke
1 parent: 241e045

Initial GGUF model commit

Files changed (1)
  1. README.md +1 -1
README.md CHANGED

````diff
@@ -129,7 +129,7 @@ Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6f
 For compatibility with older versions of llama.cpp, or for use with third-party clients and libaries, please use GGML files instead.
 
 ```
-./main -t 10 -ngl 32 -m model_007-70b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are a story writing assistant.\n\n### User:\nWrite a story about llamas\n\n### Assistant:\n"
+./main -t 10 -ngl 32 -m model_007-70b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are a story writing assistant.\n\n### User:\nWrite a story about llamas\n\n### Assistant:"
 ```
 Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
````
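The only content change in this hunk is the removal of the trailing `\n` after `### Assistant:` in the example prompt. For the `-t` flag, the quoted README advises matching the physical core count (e.g. `-t 8` on an 8-core/16-thread machine). A minimal sketch for deriving such a value on Linux, assuming GNU coreutils' `nproc` is available; note `nproc` reports *logical* CPUs, so this halves it as a rough physical-core estimate on SMT systems:

```shell
#!/bin/sh
# Rough thread count for llama.cpp's -t flag.
# nproc reports logical CPUs; on SMT/hyperthreaded systems the
# physical core count is typically half of that.
LOGICAL=$(nproc)
THREADS=$((LOGICAL / 2))
# Never pass fewer than 1 thread (single-CPU systems floor to 0 here).
if [ "$THREADS" -lt 1 ]; then
  THREADS=1
fi
echo "$THREADS"
```

The result can then be substituted for `10` in the `-t 10` argument of the command shown in the diff.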