Create/update model card (README.md)
README.md CHANGED
@@ -34,13 +34,13 @@ brew install llama.cpp # For macOS/Linux
 **CLI:**

 ```bash
-llama-cli --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file
+llama-cli --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file medical-llama3-v2-q4_k_m.gguf -p "Your prompt here"
 ```

 **Server:**

 ```bash
-llama-server --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file
+llama-server --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file medical-llama3-v2-q4_k_m.gguf -c 2048
 ```

 For more advanced usage, refer to the [llama.cpp repository](https://github.com/ggerganov/llama.cpp).
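Once `llama-server` is running, it exposes an HTTP API that includes an OpenAI-compatible chat endpoint. The request below is a minimal sketch, assuming the default bind address of `127.0.0.1:8080` and the `/v1/chat/completions` route; adjust the host and port if you start the server with `--host` or `--port`, and treat the prompt content as a placeholder.

```bash
# Sketch: query the running llama-server through its OpenAI-compatible chat endpoint.
# Assumes the default 127.0.0.1:8080 address; not part of the diff above.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful medical assistant."},
      {"role": "user", "content": "What are common symptoms of iron deficiency?"}
    ],
    "temperature": 0.7,
    "max_tokens": 256
  }'
```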