Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


UnFimbulvetr-20B - GGUF
- Model creator: https://huggingface.co/KaraKaraWitch/
- Original model: https://huggingface.co/KaraKaraWitch/UnFimbulvetr-20B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [UnFimbulvetr-20B.Q2_K.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q2_K.gguf) | Q2_K | 6.87GB |
| [UnFimbulvetr-20B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q3_K_S.gguf) | Q3_K_S | 8.01GB |
| [UnFimbulvetr-20B.Q3_K.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q3_K.gguf) | Q3_K | 8.93GB |
| [UnFimbulvetr-20B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q3_K_M.gguf) | Q3_K_M | 8.93GB |
| [UnFimbulvetr-20B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q3_K_L.gguf) | Q3_K_L | 9.73GB |
| [UnFimbulvetr-20B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.IQ4_XS.gguf) | IQ4_XS | 10.03GB |
| [UnFimbulvetr-20B.Q4_0.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q4_0.gguf) | Q4_0 | 10.46GB |
| [UnFimbulvetr-20B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [UnFimbulvetr-20B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q4_K_S.gguf) | Q4_K_S | 10.53GB |
| [UnFimbulvetr-20B.Q4_K.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q4_K.gguf) | Q4_K | 11.14GB |
| [UnFimbulvetr-20B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q4_K_M.gguf) | Q4_K_M | 11.14GB |
| [UnFimbulvetr-20B.Q4_1.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q4_1.gguf) | Q4_1 | 11.61GB |
| [UnFimbulvetr-20B.Q5_0.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q5_0.gguf) | Q5_0 | 12.76GB |
| [UnFimbulvetr-20B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q5_K_S.gguf) | Q5_K_S | 12.76GB |
| [UnFimbulvetr-20B.Q5_K.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q5_K.gguf) | Q5_K | 13.11GB |
| [UnFimbulvetr-20B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q5_K_M.gguf) | Q5_K_M | 13.11GB |
| [UnFimbulvetr-20B.Q5_1.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q5_1.gguf) | Q5_1 | 13.91GB |
| [UnFimbulvetr-20B.Q6_K.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q6_K.gguf) | Q6_K | 15.2GB |
| [UnFimbulvetr-20B.Q8_0.gguf](https://huggingface.co/RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf/blob/main/UnFimbulvetr-20B.Q8_0.gguf) | Q8_0 | 19.69GB |

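These are standard GGUF files, so any llama.cpp-based runtime should load them. As a minimal, unofficial sketch (not a workflow prescribed by this repo), one way to fetch a single quant and try it locally is with `huggingface_hub` plus `llama-cpp-python`; the filename below is simply the Q4_K_M entry from the table, and the context size is an assumption you may want to adjust.

```python
# Minimal sketch (assumes `pip install huggingface_hub llama-cpp-python`).
# Any filename from the table above can be substituted for the Q4_K_M file.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo into the local Hugging Face cache.
model_path = hf_hub_download(
    repo_id="RichardErkhov/KaraKaraWitch_-_UnFimbulvetr-20B-gguf",
    filename="UnFimbulvetr-20B.Q4_K_M.gguf",
)

# Load and run a short completion; n_ctx is a guess, not taken from this card.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a one-line greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```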


Original model description:
---
base_model: ["Sao10K/Fimbulvetr-11B-v2"]
library_name: transformers
tags:
- mergekit
- merge

---
# UnFimbulvetr-20B

![](UnFimbulator.png "A Waifu that is disappointed in me with this cursed merge. ControlNet Image Source is from the original Fimbulvetr-11B-v2.")

*Waifu to catch your attention*

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

NOTE: *Only tested this for a bit. YMMV.*

## Next Day Tests...

Downloaded the GGUF model that someone quantized... And... nope. No.

**Do not use this model.**

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* Sao10K/Fimbulvetr-11B-v2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: FimbMagic
    layer_range: [0, 13]
- sources:
  - model: FimbMagic
    layer_range: [8, 13]
- sources:
  - model: FimbMagic
    layer_range: [12, 36]
- sources:
  - model: FimbMagic
    layer_range: [12, 36]
- sources:
  - model: FimbMagic
    layer_range: [36, 48]
- sources:
  - model: FimbMagic
    layer_range: [36, 48]
merge_method: passthrough
dtype: bfloat16
```

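For context, `FimbMagic` in the config above is presumably a local alias for Sao10K/Fimbulvetr-11B-v2, the only model listed under Models Merged. A passthrough merge simply stacks the listed layer ranges, so the layer count of the result (and a ballpark parameter count) can be read straight off the slices; the 48-layer and ~10.7B figures below are assumptions about the SOLAR-style base, not numbers taken from this card.

```python
# Rough sanity check of how passthrough slicing turns an ~11B model into ~20B.
# Each slice copies a contiguous block of transformer layers from the source
# model, and the blocks are stacked in order; nothing is averaged or retrained.
slices = [(0, 13), (8, 13), (12, 36), (12, 36), (36, 48), (36, 48)]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 90 layers in the merge vs. an assumed 48 in the base

# Assuming ~10.7B parameters in the 48-layer base, scaling by the layer ratio
# gives a rough size for the merge (slightly overcounted, since embeddings and
# the LM head are not duplicated).
print(f"~{10.7e9 * total_layers / 48 / 1e9:.1f}B parameters")  # ~20.1B
```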
### Additional Notes

Fimbulvetr 11B is still a very good model. This model is for extreme trailblazers who want to test stuff!

Eval results? Don't bother.

Last one before I sleep: *I'm so sorry Sao10K...*