Update README.md
README.md
CHANGED

@@ -35,7 +35,7 @@ Once the model is placed inside the directory, run the `llama-cpp` server from i
 
 ```bash
 # For F16 model, update for different quantization accordingly
-./build/bin/llama-server -m /indri-0.1-124M-tts-F16.gguf --samplers 'top_k
+./build/bin/llama-server -m /indri-0.1-124M-tts-F16.gguf --samplers 'top_k;temperature' --top_k 15
 ```
 
 Refer [here](https://github.com/ggerganov/llama.cpp/tree/master/examples/main) if you are facing issues in running the llama-server locally.
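The comment in the changed hunk notes that the command should be updated for other quantizations. A minimal sketch of parameterizing the invocation; the quantization-suffix naming pattern (`F16`, `Q8_0`, …) is an assumption, as is the model's location at the filesystem root shown in the diff:

```shell
# Hypothetical helper: build the llama-server command for a given quantization.
QUANT="F16"                    # assumption: swap for e.g. Q8_0 or Q4_K_M
MODEL="indri-0.1-124M-tts-${QUANT}.gguf"
SAMPLERS="top_k;temperature"   # semicolon-separated sampler chain, as in the diff
echo ./build/bin/llama-server -m "/${MODEL}" --samplers "${SAMPLERS}" --top_k 15
```

Echoing the command first makes it easy to verify the assembled file name before actually launching the server.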