newsletter committed • Commit f353ddc • Parent: a2ca769
Upload README.md with huggingface_hub

README.md (CHANGED)
---
base_model: argilla/CapybaraHermes-2.5-Mistral-7B
datasets:
- argilla/dpo-mix-7k
language:
- en
library_name: trl
license: apache-2.0
tags:
- distilabel
- dpo
- rlhf
- llama-cpp
- gguf-my-repo
model-index:
- name: CapybaraHermes-2.5-Mistral-7B
  results:
# newsletter/CapybaraHermes-2.5-Mistral-7B-Q6_K-GGUF
This model was converted to GGUF format from [`argilla/CapybaraHermes-2.5-Mistral-7B`](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B) for more details on the model.
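If you want the GGUF file itself rather than letting llama.cpp resolve it, a minimal Python sketch follows. The `gguf_filename` helper is purely illustrative (it mirrors GGUF-my-repo's lowercase naming convention, an assumption, not part of this repo), and the actual download call is left commented out so the snippet runs offline:

```python
# Sketch: derive the quantized filename used by this repo.
# Assumption: GGUF-my-repo names files as
# lowercased model name + "-" + lowercased quant type + ".gguf".

def gguf_filename(model_name: str, quant: str) -> str:
    """Illustrative helper mirroring this repo's file naming convention."""
    return f"{model_name.lower()}-{quant.lower()}.gguf"

filename = gguf_filename("CapybaraHermes-2.5-Mistral-7B", "Q6_K")
# filename is the file the llama.cpp commands in this card pass via --hf-file

# To actually download it (requires network and the huggingface_hub package):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(
#     repo_id="newsletter/CapybaraHermes-2.5-Mistral-7B-Q6_K-GGUF",
#     filename=filename,
# )
```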

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo newsletter/CapybaraHermes-2.5-Mistral-7B-Q6_K-GGUF --hf-file capybarahermes-2.5-mistral-7b-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo newsletter/CapybaraHermes-2.5-Mistral-7B-Q6_K-GGUF --hf-file capybarahermes-2.5-mistral-7b-q6_k.gguf -c 2048
```
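Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (by default on port 8080, at `/v1/chat/completions` — llama.cpp defaults, stated here as an assumption about your local setup). A minimal sketch of a chat-completion request body; it only constructs the JSON and does not send anything:

```python
import json

# Sketch: build a chat-completion request body for llama-server's
# OpenAI-compatible endpoint (default: POST http://localhost:8080/v1/chat/completions).
payload = {
    "messages": [
        {"role": "user", "content": "The meaning to life and the universe is"}
    ],
    "max_tokens": 128,   # cap the generated response length
    "temperature": 0.7,  # sampling temperature; adjust to taste
}
body = json.dumps(payload)
```

While the server is running, POST this body with any HTTP client (e.g. curl with a `Content-Type: application/json` header) to get a completion back.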

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo newsletter/CapybaraHermes-2.5-Mistral-7B-Q6_K-GGUF --hf-file capybarahermes-2.5-mistral-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo newsletter/CapybaraHermes-2.5-Mistral-7B-Q6_K-GGUF --hf-file capybarahermes-2.5-mistral-7b-q6_k.gguf -c 2048
```