---
base_model: SteelStorage/Umbra-v2.1-MoE-4x10.7
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- merge
- mergekit
- Solar Moe
- Solar
- Umbra
---
## About

weighted/imatrix quants of https://huggingface.co/SteelStorage/Umbra-v2.1-MoE-4x10.7

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
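
If you prefer a scripted route, here is a minimal, illustrative sketch (not part of the upstream model card) that downloads one of the single-file quants from this repository with `huggingface_hub` and runs it through the `llama-cpp-python` bindings; the file name corresponds to the i1-Q4_K_M entry in the table below, and the prompt and context size are arbitrary.

```python
# Illustrative sketch only: assumes the huggingface_hub and llama-cpp-python
# packages are installed (pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single-file quant from this repository into the local HF cache.
model_path = hf_hub_download(
    repo_id="mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF",
    filename="Umbra-v2.1-MoE-4x10.7.i1-Q4_K_M.gguf",
)

# Load the GGUF file and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("Briefly introduce yourself.", max_tokens=64)
print(result["choices"][0]["text"])
```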

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ1_S.gguf) | i1-IQ1_S | 7.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ1_M.gguf) | i1-IQ1_M | 8.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ2_S.gguf) | i1-IQ2_S | 11.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ2_M.gguf) | i1-IQ2_M | 12.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q2_K.gguf) | i1-Q2_K | 13.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ3_S.gguf) | i1-IQ3_S | 15.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ3_M.gguf) | i1-IQ3_M | 16.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q3_K_L.gguf) | i1-Q3_K_L | 19.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q4_0.gguf) | i1-Q4_0 | 20.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q4_K_M.gguf) | i1-Q4_K_M | 22.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q5_K_S.gguf) | i1-Q5_K_S | 25.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Umbra-v2.1-MoE-4x10.7-i1-GGUF/resolve/main/Umbra-v2.1-MoE-4x10.7.i1-Q6_K.gguf) | i1-Q6_K | 29.9 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.

<!-- end -->