Natkituwu committed · verified
Commit 701e5b1 · 1 Parent(s): cdff74e

Update README.md

Files changed (1)
  1. README.md +53 -0
README.md CHANGED
@@ -1,3 +1,56 @@
  ---
+ base_model:
+ - grimjim/kukulemon-7B
+ - Nitral-AI/Kunocchini-7b-128k-test
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - mistral
+ - alpaca
  license: cc-by-nc-4.0
  ---
+
+ # Kunokukulemonchini-7b-3.5bpw-exl2
+
+ This is a 3.5 bpw exl2 quant of the merged model [icefog72/Kunokukulemonchini-7b](https://huggingface.co/icefog72/Kunokukulemonchini-7b).
+
+ Use this version only if you have low-end hardware. If you have the resources, I recommend the 4.1 bpw or 6.5 bpw version instead.
+
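+ For reference, loading an exl2 quant with the exllamav2 Python package looks roughly like the sketch below; the local model path, sampler values, and Alpaca-style prompt (guessed from the `alpaca` tag) are only placeholders.
+
+ ```python
+ # Rough ExLlamaV2 loading sketch; adjust the path and settings to your setup.
+ from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
+ from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
+
+ config = ExLlamaV2Config()
+ config.model_dir = "Kunokukulemonchini-7b-3.5bpw-exl2"  # local download of this repo (placeholder path)
+ config.prepare()
+
+ model = ExLlamaV2(config)
+ cache = ExLlamaV2Cache(model, lazy=True)
+ model.load_autosplit(cache)
+ tokenizer = ExLlamaV2Tokenizer(config)
+
+ generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
+ settings = ExLlamaV2Sampler.Settings()
+ settings.temperature = 0.8  # example sampler values, not a recommendation
+ settings.top_p = 0.9
+
+ # Alpaca-style prompt, assumed from the tags above.
+ prompt = "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
+ print(generator.generate_simple(prompt, settings, 200))
+ ```
+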
+ ## Merge Details
+
+ The kukulemon-7B config.json was slightly edited before the merge to get at least a ~32k context window.
+
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
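+ Conceptually, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line. A minimal sketch of the idea (illustrative only, not mergekit's actual implementation; the `slerp` helper and `eps` tolerance are made up for the example):
+
+ ```python
+ import torch
+
+ def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
+     """Spherical linear interpolation between two weight tensors at factor t."""
+     v0, v1 = w0.flatten().float(), w1.flatten().float()
+     # Angle between the two normalized weight vectors.
+     dot = torch.clamp(torch.dot(v0 / (v0.norm() + eps), v1 / (v1.norm() + eps)), -1.0, 1.0)
+     omega = torch.acos(dot)
+     if omega.abs() < eps:
+         # Nearly parallel vectors: fall back to plain linear interpolation.
+         out = (1.0 - t) * v0 + t * v1
+     else:
+         out = (torch.sin((1.0 - t) * omega) * v0 + torch.sin(t * omega) * v1) / torch.sin(omega)
+     return out.reshape(w0.shape).to(w0.dtype)
+ ```
+
+ The per-filter `t` schedules in the configuration below control how this interpolation factor varies across layers for the attention and MLP weights.
+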
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
+ * [Nitral-AI/Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+   - sources:
+       - model: grimjim/kukulemon-7B
+         layer_range: [0, 32]
+       - model: Nitral-AI/Kunocchini-7b-128k-test
+         layer_range: [0, 32]
+ merge_method: slerp
+ base_model: Nitral-AI/Kunocchini-7b-128k-test
+ parameters:
+   t:
+     - filter: self_attn
+       value: [0, 0.5, 0.3, 0.7, 1]
+     - filter: mlp
+       value: [1, 0.5, 0.7, 0.3, 0]
+     - value: 0.5
+ dtype: float16
+ ```
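+
+ A config like this is normally run with mergekit's `mergekit-yaml` CLI (for example, `mergekit-yaml config.yml ./merged-model`), though the exact invocation used for this particular merge may have differed.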