---
base_model: 01-ai/Yi-34B
datasets:
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- yi
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
---
## About

weighted/imatrix quants of https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B

<!-- provided-files -->
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
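
If you prefer to script the download, here is a minimal Python sketch using
`huggingface_hub`. The repo id and the Q4_K_M filename come from the table
below; the `.part1of2`/`.part2of2` names in the concatenation step are an
illustrative assumption, so check the repo's file listing for the real names
before relying on them:

```python
# Sketch: fetch a single-file quant from this repo by name.
from huggingface_hub import hf_hub_download
import shutil

path = hf_hub_download(
    repo_id="mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF",
    filename="Nous-Hermes-2-Yi-34B.i1-Q4_K_M.gguf",  # any filename from the table below
)
print(path)  # local path of the downloaded GGUF

# Sketch: if a quant ships in multiple parts, append the parts in order into
# one file (byte-for-byte, the same effect as `cat part1 part2 > whole`).
# NOTE: these part names are hypothetical; check the repo's file listing.
parts = [
    "Nous-Hermes-2-Yi-34B.i1-Q6_K.gguf.part1of2",
    "Nous-Hermes-2-Yi-34B.i1-Q6_K.gguf.part2of2",
]
with open("Nous-Hermes-2-Yi-34B.i1-Q6_K.gguf", "wb") as whole:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, whole)
```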

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | fast, lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | fast, beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | almost as good as Q4_K_M |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, medium quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Hermes-2-Yi-34B-i1-GGUF/resolve/main/Nous-Hermes-2-Yi-34B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | best weighted quant |
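
To give a concrete sense of how one of these files is consumed once
downloaded, here is a minimal sketch using llama-cpp-python. The Q4_K_M
filename matches the row above; `n_ctx`, `n_gpu_layers`, and
`chat_format="chatml"` (inferred from the `chatml` tag in the metadata) are
illustrative assumptions to tune for your own setup:

```python
# Sketch: load a quant from the table above and run one chat turn.
from llama_cpp import Llama

llm = Llama(
    model_path="Nous-Hermes-2-Yi-34B.i1-Q4_K_M.gguf",  # file from the table above
    n_ctx=4096,            # placeholder context size; adjust to taste
    n_gpu_layers=-1,       # offload all layers to GPU if available; 0 = CPU only
    chat_format="chatml",  # assumption, based on the "chatml" tag in the metadata
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what imatrix quants are."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```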

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

<!-- end -->