mradermacher
committed on
auto-patch README.md
README.md CHANGED
@@ -5,6 +5,9 @@ datasets:
 - pankajmathur/WizardLM_Orca
 language:
 - en
+- de
+- es
+- fr
 library_name: transformers
 quantized_by: mradermacher
 ---
@@ -45,7 +48,6 @@ more details, including on how to concatenate multi-part files.
 | [PART 1](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-i1-GGUF/resolve/main/falcon-180B-WizardLM_Orca.i1-Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-i1-GGUF/resolve/main/falcon-180B-WizardLM_Orca.i1-Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-i1-GGUF/resolve/main/falcon-180B-WizardLM_Orca.i1-Q5_K_M.gguf.part3of3) | i1-Q5_K_M | 130.1 |  |
 | [PART 1](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-i1-GGUF/resolve/main/falcon-180B-WizardLM_Orca.i1-Q6_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-i1-GGUF/resolve/main/falcon-180B-WizardLM_Orca.i1-Q6_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-i1-GGUF/resolve/main/falcon-180B-WizardLM_Orca.i1-Q6_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-i1-GGUF/resolve/main/falcon-180B-WizardLM_Orca.i1-Q6_K.gguf.part4of4) | i1-Q6_K | 146.6 | practically like static Q6_K |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
 
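The table rows in the second hunk reference multi-part GGUF files, which must be reassembled in order into a single .gguf before loading; the README's surrounding text (visible in the hunk context) points to more details on how to concatenate them. A minimal Python sketch of that reassembly step, assuming the three i1-Q5_K_M parts from the table have already been downloaded into the working directory (the chunk size is an illustrative choice, not something the model card specifies):

```python
from pathlib import Path

# Part files, named exactly as in the download table above.
parts = [
    Path(f"falcon-180B-WizardLM_Orca.i1-Q5_K_M.gguf.part{i}of3")
    for i in (1, 2, 3)
]

# Concatenate the parts in order into one GGUF file, streaming
# in 1 MiB chunks since the combined file is ~130 GB.
with open("falcon-180B-WizardLM_Orca.i1-Q5_K_M.gguf", "wb") as out:
    for part in parts:
        with part.open("rb") as f:
            while chunk := f.read(1 << 20):
                out.write(chunk)
```

On Unix-like systems the same result is typically achieved by `cat`-ing the parts in order into one file; either way, the parts can be deleted once the combined .gguf is verified to load.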