ToastyPigeon committed
Commit 7ea94b0 · verified · 1 Parent(s): f66536a

Update README.md

Files changed (1): README.md (+72 −72)

README.md CHANGED
@@ -1,72 +1,72 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using D:/MLnonsense/models/TeeZee_Mistral-7B-v0.1-fp32 as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * D:/MLnonsense/models/fearlessdots_WizardLM-2-7B-abliterated
- * D:/MLnonsense/models/Gryphe_MythoMist-7b
- * D:/MLnonsense/models/Sao10K_Frostwind-v2.1-m7
- * D:/MLnonsense/models/senseable_Westlake-7b-v2
- * D:/MLnonsense/models/maywell_PiVoT-0.1-Evil-a
- * D:/MLnonsense/models/Undi95_Toppy-M-7B
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: D:/MLnonsense/models/fearlessdots_WizardLM-2-7B-abliterated
-     parameters:
-       weight: 1.0
-   - model: D:/MLnonsense/models/Undi95_Toppy-M-7B
-     parameters:
-       weight:
-         - filter: self_attn
-           value: 0.8
-         - value: 0.5
-   - model: D:/MLnonsense/models/senseable_Westlake-7b-v2
-     parameters:
-       weight:
-         - filter: self_attn
-           value: 0.6
-         - value: 0.4
-   - model: D:/MLnonsense/models/maywell_PiVoT-0.1-Evil-a
-     parameters:
-       weight:
-         - filter: mlp
-           value: 0.2
-         - value: 0.0
-   - model: D:/MLnonsense/models/Sao10K_Frostwind-v2.1-m7
-     parameters:
-       weight:
-         - filter: self_attn
-           value: 0.2
-         - filter: mlp
-           value: 0.8
-         - value: 0.5
-   - model: D:/MLnonsense/models/Gryphe_MythoMist-7b
-     parameters:
-       weight:
-         - filter: mlp
-           value: 0.6
-         - value: 0.0
- base_model: D:/MLnonsense/models/TeeZee_Mistral-7B-v0.1-fp32
- merge_method: task_arithmetic
- dtype: float32
- ```
 
+ ---
+ base_model: []
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using Mistral-7B-v0.1 as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * fearlessdots/WizardLM-2-7B-abliterated
+ * Gryphe/MythoMist-7b
+ * Sao10K/Frostwind-v2.1-m7
+ * senseable/Westlake-7b-v2
+ * maywell/PiVoT-0.1-Evil-a
+ * Undi95/Toppy-M-7B
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: fearlessdots/WizardLM-2-7B-abliterated
+     parameters:
+       weight: 1.0
+   - model: Undi95/Toppy-M-7B
+     parameters:
+       weight:
+         - filter: self_attn
+           value: 0.8
+         - value: 0.5
+   - model: senseable/Westlake-7b-v2
+     parameters:
+       weight:
+         - filter: self_attn
+           value: 0.6
+         - value: 0.4
+   - model: maywell/PiVoT-0.1-Evil-a
+     parameters:
+       weight:
+         - filter: mlp
+           value: 0.2
+         - value: 0.0
+   - model: Sao10K/Frostwind-v2.1-m7
+     parameters:
+       weight:
+         - filter: self_attn
+           value: 0.2
+         - filter: mlp
+           value: 0.8
+         - value: 0.5
+   - model: Gryphe/MythoMist-7b
+     parameters:
+       weight:
+         - filter: mlp
+           value: 0.6
+         - value: 0.0
+ base_model: TeeZee/Mistral-7B-v0.1-fp32
+ merge_method: task_arithmetic
+ dtype: float32
+ ```
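For intuition about what the configuration above expresses: task arithmetic adds weighted "task vectors" (fine-tuned weights minus base weights) to the base model, and a per-tensor weight is chosen by the first matching `filter` entry (e.g. `self_attn`, `mlp`), falling back to the entry with no filter. The sketch below is a conceptual toy, not mergekit's actual implementation; the function names and the dict-of-lists tensor representation are made up for illustration.

```python
def resolve_weight(weight, tensor_name):
    """Pick the merge weight for one tensor.

    `weight` is either a scalar or a list of entries like
    {"filter": "self_attn", "value": 0.8} / {"value": 0.5},
    mirroring the YAML above (assumed first-match-wins semantics).
    """
    if isinstance(weight, (int, float)):
        return float(weight)
    for entry in weight:
        if "filter" in entry:
            if entry["filter"] in tensor_name:
                return float(entry["value"])  # filter matched this tensor
        else:
            return float(entry["value"])      # unfiltered fallback value
    return 0.0

def task_arithmetic_merge(base, tuned_models, tensor_name):
    """merged = base + sum_i w_i * (model_i - base), elementwise per tensor.

    `base` and each model are dicts mapping tensor names to lists of floats.
    `tuned_models` is a list of (params, weight_spec) pairs.
    """
    merged = list(base[tensor_name])
    for params, weight_spec in tuned_models:
        w = resolve_weight(weight_spec, tensor_name)
        for i, (b, m) in enumerate(zip(base[tensor_name], params[tensor_name])):
            merged[i] += w * (m - b)
    return merged
```

With a weight spec of `[{"filter": "self_attn", "value": 0.5}, {"value": 0.0}]`, a `self_attn` tensor moves halfway toward the fine-tune while an `mlp` tensor is left at the base values, which is how the per-filter entries in the YAML selectively blend attention and MLP layers.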