Xin Liu committed
Commit: bf47058
Parent(s): ab9305d

Update

Signed-off-by: Xin Liu <[email protected]>

README.md CHANGED
---
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
inference: false
license: apache-2.0
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
  results: []
model_creator: NousResearch
model_name: Nous Hermes 2 Mixtral 8X7B DPO
model_type: mixtral
quantized_by: Second State Inc.
language:
- en
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
---

<!-- header start -->
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF

## Original Model

[NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)

## Run with LlamaEdge

- Prompt template

  - Prompt type: `chatml`

  - Prompt string (a filled-in example follows this list)

    ```text
    <|im_start|>system
    {system_message}<|im_end|>
    <|im_start|>user
    {prompt}<|im_end|>
    <|im_start|>assistant
    ```

- Run as LlamaEdge service (an example request follows this list)

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf llama-api-server.wasm -p chatml
  ```

- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf llama-chat.wasm -p chatml
  ```
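
For illustration, here is the same `chatml` template filled in with a hypothetical system message and user question; the wording below is an example only and is not part of the original card:

```text
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is the capital of France?<|im_end|>
<|im_start|>assistant
```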
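
Once the API server above is running, it can be queried over HTTP. The request below is a minimal sketch, assuming the server listens on the default port 8080 and exposes an OpenAI-compatible `/v1/chat/completions` route; the port, route, and message contents are illustrative assumptions rather than details taken from this card.

```bash
# Hypothetical example request against the LlamaEdge API server started above.
# Assumes the default port 8080 and an OpenAI-compatible chat completions route.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ],
        "model": "Nous-Hermes-2-Mixtral-8x7B-DPO"
      }'
```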

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q2_K.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q2_K.gguf) | Q2_K | 2 | 17.3 GB | smallest, significant quality loss - not recommended for most purposes |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_L.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_L.gguf) | Q3_K_L | 3 | 24.2 GB | small, substantial quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_M.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_M.gguf) | Q3_K_M | 3 | 22.5 GB | very small, high quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_S.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_S.gguf) | Q3_K_S | 3 | 20.4 GB | very small, high quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_0.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_0.gguf) | Q4_0 | 4 | 26.4 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_M.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_M.gguf) | Q4_K_M | 4 | 28.4 GB | medium, balanced quality - recommended |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_S.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_S.gguf) | Q4_K_S | 4 | 26.7 GB | small, greater quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_0.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_0.gguf) | Q5_0 | 5 | 32.2 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf) | Q5_K_M | 5 | 33.2 GB | large, very low quality loss - recommended |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_S.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_S.gguf) | Q5_K_S | 5 | 32.2 GB | large, low quality loss - recommended |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q6_K.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q6_K.gguf) | Q6_K | 6 | 38.4 GB | very large, extremely low quality loss |
| [Nous-Hermes-2-Mixtral-8x7B-DPO-Q8_0.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q8_0.gguf) | Q8_0 | 8 | 49.6 GB | very large, extremely low quality loss - not recommended |
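
To fetch one of these files, a Hugging Face CLI download along the following lines should work; this is a sketch assuming the `huggingface-cli` tool from the `huggingface_hub` package is installed, and any file name from the table can be substituted for the Q5_K_M file shown.

```bash
# Hypothetical download example; assumes huggingface-cli (from huggingface_hub) is installed.
huggingface-cli download second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF \
  Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf \
  --local-dir .
```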