sal076 committed
Commit
db8f139
1 Parent(s): 7c7c84c

Update README.md

Files changed (1): README.md (+3 −5)
README.md CHANGED
@@ -2,7 +2,7 @@
 base_model: sal076/L3.1_RP_TEST3
 language:
 - en
-license: apache-2.0
+license: llama3.1
 tags:
 - text-generation-inference
 - transformers
@@ -11,12 +11,10 @@ tags:
 - trl
 - sft
 - llama-cpp
-- gguf-my-repo
 ---

 # sal076/L3.1_RP_TEST3-Q4_K_M-GGUF
-This model was converted to GGUF format from [`sal076/L3.1_RP_TEST3`](https://huggingface.co/sal076/L3.1_RP_TEST3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/sal076/L3.1_RP_TEST3) for more details on the model.
+This model Is a (Hopefully) better version then my last model

 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
@@ -56,4 +54,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo sal076/L3.1_RP_TEST3-Q4_K_M-GGUF --hf-file l3.1_rp_test3-q4_k_m.gguf -c 2048
-```
+```
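The README's "Use with llama.cpp" section boils down to an install step plus a single command; a minimal sketch of that flow, assuming Homebrew is already set up (the `-p` prompt string here is illustrative, not from the model card):

```shell
# Install llama.cpp (provides the llama-cli and llama-server binaries)
brew install llama.cpp

# One-shot inference: llama.cpp downloads the GGUF file from the Hub on first use
llama-cli --hf-repo sal076/L3.1_RP_TEST3-Q4_K_M-GGUF \
          --hf-file l3.1_rp_test3-q4_k_m.gguf \
          -p "Once upon a time"

# Or serve the model over HTTP (OpenAI-compatible API, default port 8080);
# -c 2048 sets the context window size, matching the README's example
llama-server --hf-repo sal076/L3.1_RP_TEST3-Q4_K_M-GGUF \
             --hf-file l3.1_rp_test3-q4_k_m.gguf \
             -c 2048
```

These commands require network access to fetch the ~4-bit quantized GGUF file on first run; afterwards it is served from the local cache.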