Commit 21c643b by reach-vb (HF staff)
1 parent: ea43547

Update README.md (#1)

- Update README.md (58c8355f6b03ea76559d3d9425c823f538744f0a)

Files changed (1):
  1. README.md +5 -5
README.md CHANGED
@@ -19,7 +19,7 @@ model-index:
 results: []
 ---
 
-# reach-vb/smollm-1.7B-instruct-add-basics-Q8_0-GGUF
+# smollm-1.7B-instruct-add-basics-Q8_0-GGUF
 This model was converted to GGUF format from [`HuggingFaceTB/smollm-1.7B-instruct-add-basics`](https://huggingface.co/HuggingFaceTB/smollm-1.7B-instruct-add-basics) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/HuggingFaceTB/smollm-1.7B-instruct-add-basics) for more details on the model.
 
@@ -34,12 +34,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo reach-vb/smollm-1.7B-instruct-add-basics-Q8_0-GGUF --hf-file smollm-1.7b-instruct-add-basics-q8_0.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo HuggingFaceTB/smollm-1.7B-instruct-add-basics-Q8_0-GGUF --hf-file smollm-1.7b-instruct-add-basics-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo reach-vb/smollm-1.7B-instruct-add-basics-Q8_0-GGUF --hf-file smollm-1.7b-instruct-add-basics-q8_0.gguf -c 2048
+llama-server --hf-repo HuggingFaceTB/smollm-1.7B-instruct-add-basics-Q8_0-GGUF --hf-file smollm-1.7b-instruct-add-basics-q8_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -56,9 +56,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo reach-vb/smollm-1.7B-instruct-add-basics-Q8_0-GGUF --hf-file smollm-1.7b-instruct-add-basics-q8_0.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo HuggingFaceTB/smollm-1.7B-instruct-add-basics-Q8_0-GGUF --hf-file smollm-1.7b-instruct-add-basics-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo reach-vb/smollm-1.7B-instruct-add-basics-Q8_0-GGUF --hf-file smollm-1.7b-instruct-add-basics-q8_0.gguf -c 2048
+./llama-server --hf-repo HuggingFaceTB/smollm-1.7B-instruct-add-basics-Q8_0-GGUF --hf-file smollm-1.7b-instruct-add-basics-q8_0.gguf -c 2048
 ```
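
For anyone checking the corrected commands locally, here is a minimal sketch of the equivalent manual flow. It is an illustration, not part of the commit: it assumes the `HuggingFaceTB/smollm-1.7B-instruct-add-basics-Q8_0-GGUF` repo is publicly reachable, that `llama-cli` and `llama-server` were built as in the steps quoted above, and that `huggingface-cli` (shipped with the `huggingface_hub` Python package) is installed; repo and file names are copied straight from the diff.

```bash
# Download the quantized checkpoint once, rather than letting llama.cpp
# resolve it on every run via --hf-repo/--hf-file (names taken from the diff).
huggingface-cli download HuggingFaceTB/smollm-1.7B-instruct-add-basics-Q8_0-GGUF \
  smollm-1.7b-instruct-add-basics-q8_0.gguf --local-dir .

# Same prompt as the README, pointing -m at the local file.
./llama-cli -m smollm-1.7b-instruct-add-basics-q8_0.gguf \
  -p "The meaning to life and the universe is" -n 128

# Or serve it and hit llama.cpp's built-in HTTP completion endpoint.
./llama-server -m smollm-1.7b-instruct-add-basics-q8_0.gguf -c 2048 --port 8080 &
curl http://localhost:8080/completion -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```

Using `--hf-repo`/`--hf-file` as in the README should behave equivalently, since llama.cpp (built with `LLAMA_CURL=1`) fetches and caches the file itself; downloading explicitly just makes the artifact visible on disk.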