andersonarc committed on
Commit 5b53289
1 Parent(s): 5f49a15

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ maid-yuzu-v8-alter-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ maid-yuzu-v8-alter-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ maid-yuzu-v8-alter-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,67 @@
---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- mistralai/Mixtral-8x7B-Instruct-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
- NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
- ycros/BagelMIsteryTour-v2-8x7B
- smelborp/MixtralOrochi8x7B
library_name: transformers
tags:
- mergekit
- merge

---
# maid-yuzu-v8-alter-GGUF

Quantized from https://huggingface.co/rhplus0831/maid-yuzu-v8-alter.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

v7's approach worked better than I thought, so I tried something even weirder as a test. I don't think a proper model will come out, but I'm curious about the results.

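The GGUF files in this repo can be loaded by any llama.cpp-compatible runtime. Below is a minimal sketch, assuming the llama-cpp-python bindings are installed and the Q4_K_M file has been downloaded locally; the prompt is only an illustration, not a recommended template for this model.

```python
# Minimal sketch: load the Q4_K_M quant with llama-cpp-python (an assumed runtime,
# not something this repo ships) and run a short completion.
from llama_cpp import Llama

llm = Llama(
    model_path="maid-yuzu-v8-alter-Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=4096,        # context window; lower it if memory is tight
    n_gpu_layers=-1,   # offload all layers when a GPU build of llama.cpp is available
)

out = llm("Write a one-sentence greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```

The Q5_K_M and Q6_K files are used the same way; they take more memory in exchange for lower quantization error.
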
## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

These models were merged using the SLERP method in the following order (a short sketch of the interpolation itself follows the list):

- maid-yuzu-v8-base: mistralai/Mixtral-8x7B-v0.1 + mistralai/Mixtral-8x7B-Instruct-v0.1 = 0.5
- maid-yuzu-v8-step1: above + jondurbin/bagel-dpo-8x7b-v0.2 = 0.25
- maid-yuzu-v8-step2: above + cognitivecomputations/dolphin-2.7-mixtral-8x7b = 0.25
- maid-yuzu-v8-step3: above + NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss = 0.25
- maid-yuzu-v8-step4-alter: above + ycros/BagelMIsteryTour-v2-8x7B = 0.5
- maid-yuzu-v8-alter: above + smelborp/MixtralOrochi8x7B = 0.5

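For reference, this is roughly what SLERP does to each pair of weight tensors at a given t. It is an illustrative sketch only, not mergekit's exact implementation (which handles edge cases and per-parameter details differently):

```python
# Illustrative sketch of spherical linear interpolation (SLERP) between two
# weight tensors; not mergekit's exact code.
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a, b = w0.flatten().float(), w1.flatten().float()
    a_unit = a / (a.norm() + eps)
    b_unit = b / (b.norm() + eps)
    # Angle between the two tensors, treated as points on a hypersphere.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs().item() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        mixed = (1.0 - t) * a + t * b
    else:
        so = torch.sin(omega)
        mixed = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return mixed.reshape(w0.shape).to(w0.dtype)
```

With t = 0.5 the two endpoints contribute equally; smaller t values keep the result closer to one endpoint, here presumably the accumulated "above" model in each step.
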
### Models Merged

The following models were included in the merge:
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
* ../maid-yuzu-v8-step4-alter

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model:
    path: ../maid-yuzu-v8-step4-alter
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: ../maid-yuzu-v8-step4-alter
  - layer_range: [0, 32]
    model:
      model:
        path: smelborp/MixtralOrochi8x7B
```
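To reproduce a step like this one, mergekit can also be driven from Python rather than its CLI. A rough sketch follows, assuming the config above is saved as config.yml and that the installed mergekit version exposes the MergeConfiguration / run_merge entry points described in its README; check your version before relying on this.

```python
# Rough sketch: run the YAML config above through mergekit's Python API.
# Assumes mergekit is installed and exposes MergeConfiguration / run_merge
# as documented in its README.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./maid-yuzu-v8-alter",  # hypothetical output directory
    options=MergeOptions(),           # defaults; enable CUDA etc. as needed
)
```
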
maid-yuzu-v8-alter-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c3ff1f0ad85fc90e30eb750b9e6ed94724ccda784c0b7fc80e664eb18e7c2c2
size 28865275264
maid-yuzu-v8-alter-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:03ef838d84d486fe8b9c54a32728acdd1ec6c553597071646eff66a3ee12e9a1
size 33646388608
maid-yuzu-v8-alter-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aafd3e2ef946bf1a16ab9e0aa8f2d44536674e10fc33b5a03ef08431b5d07da8
size 38797624704
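Each .gguf entry above is a Git LFS pointer: the repository itself stores only the sha256 oid and byte size, while the actual weights live in LFS storage. A minimal sketch for checking a downloaded file against its pointer (the local filename is assumed to match the repo entry):

```python
# Sketch: verify a downloaded GGUF against the sha256 oid and size from its LFS pointer.
import hashlib
import os

def matches_pointer(path: str, expected_sha256: str, expected_size: int) -> bool:
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Values taken from the Q4_K_M pointer above.
print(matches_pointer(
    "maid-yuzu-v8-alter-Q4_K_M.gguf",
    "5c3ff1f0ad85fc90e30eb750b9e6ed94724ccda784c0b7fc80e664eb18e7c2c2",
    28865275264,
))
```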