knifeayumu committed on
Commit 386932c
1 Parent(s): 4519c9e

Update README.md

Files changed (1)
  1. README.md +51 -38
README.md CHANGED
@@ -1,38 +1,51 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # Lite-Cydonia-22B-v1.1
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using G:/AI-TextGen/text-generation-webui/models/Mistral-Small-Instruct-2409 as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * G:/AI-TextGen/text-generation-webui/models/Cydonia-22B-v1.1
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: G:/AI-TextGen/text-generation-webui/models/Mistral-Small-Instruct-2409
-     parameters:
-       weight: 0.5
-   - model: G:/AI-TextGen/text-generation-webui/models/Cydonia-22B-v1.1
-     parameters:
-       weight: 0.5
- merge_method: task_arithmetic
- base_model: G:/AI-TextGen/text-generation-webui/models/Mistral-Small-Instruct-2409
- dtype: float16
- ```
+ ---
+ base_model:
+ - mistralai/Mistral-Small-Instruct-2409
+ - TheDrummer/Cydonia-22B-v1.1
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: other
+ ---
+ ![The Drummer turns into a Joshi Youchien](https://huggingface.co/knifeayumu/Lite-Cydonia-22B-v1.1-50-50/resolve/main/Lite-Cydonia-22B-v1.1.png)
+ "A balancing act between smart do-gooders and creative evil-doers."
+
+ # The Drummer turns into a Joshi Youchien
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
+
+ ## Inspiration
+
+ I thought both the [BeaverAI/Cydonia-22B-v1f-GGUF](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1) and [BeaverAI/Cydonia-22B-v1e-GGUF](https://huggingface.co/BeaverAI/Cydonia-22B-v1e-GGUF) versions were a bit too evil. Their sense of morality was too skewed, and generations were quite deterministic (swipes didn't offer much variety) compared to the base model. Then an idea popped into my mind: why not merge it back with the base model? That way, the model could regain a sense of "good", at least a little, and the merge might also reduce the deterministic generations.
+
+ Quick testing shows... it works? Zero-shot evil Q&A no longer works, but with a bit of persuasion, the model does answer. Compared to [knifeayumu/Lite-Cydonia-22B-v1.1-75-25](https://huggingface.co/knifeayumu/Lite-Cydonia-22B-v1.1-75-25), this merge is more censored but also smarter.
+
+ Credits to [TheDrummer](https://huggingface.co/TheDrummer) and [BeaverAI](https://huggingface.co/BeaverAI), who make such finetunes. "Lightly decensored" is a heavy understatement in this case.
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [mistralai/Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) as the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [TheDrummer/Cydonia-22B-v1.1](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: mistralai/Mistral-Small-Instruct-2409
+     parameters:
+       weight: 0.5
+   - model: TheDrummer/Cydonia-22B-v1.1
+     parameters:
+       weight: 0.5
+ merge_method: task_arithmetic
+ base_model: mistralai/Mistral-Small-Instruct-2409
+ dtype: float16
+ ```
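
Task arithmetic merges each weight tensor as `base + Σᵢ wᵢ·(modelᵢ − base)`. Since the new configuration lists the base model itself as one of the two entries, its task vector is zero, and the 0.5/0.5 weights reduce to a straight 50/50 linear interpolation between the base and Cydonia checkpoints. A minimal sketch of that arithmetic on toy tensors (the helper name and the toy values are illustrative, not mergekit's actual API):

```python
import numpy as np

def task_arithmetic_merge(base, models, weights):
    """Task arithmetic: base + sum_i w_i * (model_i - base), per tensor."""
    merged = base.copy()
    for model, w in zip(models, weights):
        merged += w * (model - base)
    return merged

base = np.array([1.0, 2.0])     # stand-in for a Mistral-Small tensor
cydonia = np.array([3.0, 0.0])  # stand-in for the Cydonia finetune tensor

# Base appears in its own model list, so its task vector (base - base) is zero;
# the result is base + 0.5*(cydonia - base), i.e. a 50/50 interpolation.
merged = task_arithmetic_merge(base, [base, cydonia], [0.5, 0.5])
print(merged)  # → [2. 1.]
```

This also makes clear how the 75-25 sibling merge differs: weights of 0.75/0.25 pull the interpolation closer to the Cydonia finetune, trading some of the base model's alignment for the finetune's behavior.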