SanjiWatsuki committed on
Commit 847300f
1 Parent(s): 2b08c20

Update README.md

Files changed (1):
README.md +104 -1
README.md CHANGED
@@ -1,3 +1,106 @@
---
- license: cc-by-nc-4.0
+ license: cc-by-4.0
+ language:
+ - en
+ tags:
+ - merge
+ - not-for-all-audiences
+ - nsfw
---
+
+ <div style="display: flex; justify-content: center; align-items: center">
+ <img src="https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/resolve/main/assets/cybermaid.png">
+ </div>
+
+ <p align="center">
+ <big><b>Top 1 RP Performer on MT-Bench 🤪</b></big>
+ </p>
+
+ <p align="center">
+ <strong>Next Gen Silicon-Based RP Maid</strong>
+ </p>
+
+ ## WTF is This?
+
+ Silicon-Maid-7B is another model targeted at being both strong at RP **and** a smart cookie that can follow character cards very well. As of right now, Silicon-Maid-7B outscores both of my previous 7B RP models on my RP benchmark, and I have been impressed by this model's creativity. It is suitable for RP/ERP and general use.
+
+ It's built on [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), a 7B model which scores unusually high on MT-Bench, and chargoddard/loyal-piano-m7, an Alpaca-format 7B model with surprisingly creative outputs. I was excited to see the xDAN model for two main reasons:
+ * MT-Bench normally correlates well with real-world model quality
+ * It was an Alpaca-prompt model with high benchmark scores, which meant I could try swapping out the Marcoroni frankenmerge used in my previous model
+
+ **MT-Bench Average Turn**
+ | model | score | size |
+ |----------------------|----------|--------|
+ | gpt-4 | 8.99 | - |
+ | *xDAN-L1-Chat-RL-v1* | 8.35 | 7b |
+ | Starling-7B | 8.09 | 7b |
+ | Claude-2 | 8.06 | - |
+ | **Silicon-Maid** | **7.96** | **7b** |
+ | *Loyal-Macaroni-Maid*| 7.95 | 7b |
+ | gpt-3.5-turbo | 7.94 | 20b? |
+ | Claude-1 | 7.90 | - |
+ | OpenChat-3.5 | 7.81 | - |
+ | vicuna-33b-v1.3 | 7.12 | 33b |
+ | wizardlm-30b | 7.01 | 30b |
+ | Llama-2-70b-chat | 6.86 | 70b |
+
+ <img src="https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/resolve/main/assets/fig-silicon-loyal.png">
+
+ It's unclear to me whether xDAN-L1-Chat-RL-v1 is overtly benchmaxxing, but it seemed like a solid 7B from my limited testing (although nothing that screams second-best model behind GPT-4). Amusingly, the model lost almost all of its coding and math ability in the merge but somehow *improved* in "Extraction". This was a much greater MT-Bench dropoff than I expected, perhaps suggesting that the math/coding ability in the original model was rather dense and susceptible to being lost in a DARE TIES merge?
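+
+ For intuition, here is a minimal sketch of the DARE drop-and-rescale step as I understand it; this is illustrative PyTorch, not mergekit's actual implementation:
+
+ ```
+ # Illustrative DARE step: each merged model contributes a task-vector "delta"
+ # (fine-tuned weights minus base weights). DARE keeps each delta entry with
+ # probability `density` and rescales survivors by 1/density so the expected
+ # value is unchanged; abilities spread thinly across parameters can vanish.
+ import torch
+
+ def dare_drop(delta: torch.Tensor, density: float) -> torch.Tensor:
+     mask = torch.bernoulli(torch.full_like(delta, density))
+     return delta * mask / density
+
+ delta = torch.randn(4, 4)             # toy task vector
+ print(dare_drop(delta, density=0.4))  # ~60% of entries zeroed, rest scaled up
+ ```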
+
+ Besides that, the merger is almost identical to the Loyal-Macaroni-Maid merger, with a new base "smart cookie" model. If you liked any of my previous RP models, give this one a shot and let me know in the Community tab what you think!
+
+ ### The Sauce
+
+ ```
+ models: # Top-Loyal-Bruins-Maid-DARE-7B
+   - model: mistralai/Mistral-7B-v0.1
+     # no parameters necessary for base model
+   - model: xDAN-AI/xDAN-L1-Chat-RL-v1
+     parameters:
+       weight: 0.4
+       density: 0.8
+   - model: chargoddard/loyal-piano-m7
+     parameters:
+       weight: 0.3
+       density: 0.8
+   - model: Undi95/Toppy-M-7B
+     parameters:
+       weight: 0.2
+       density: 0.4
+   - model: NeverSleep/Noromaid-7b-v0.2
+     parameters:
+       weight: 0.2
+       density: 0.4
+   - model: athirdpath/NSFW_DPO_vmgb-7b
+     parameters:
+       weight: 0.2
+       density: 0.4
+ merge_method: dare_ties
+ base_model: mistralai/Mistral-7B-v0.1
+ parameters:
+   int8_mask: true
+ dtype: bfloat16
+ ```
+
+ For more information about why I use this merger, see the [Loyal-Macaroni-Maid repo](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B#the-sauce-all-you-need-is-dare).
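+
+ If you want to reproduce a merge like this yourself, here is a minimal sketch, assuming mergekit is installed and the recipe above is saved as `silicon-maid.yml` (a filename I made up for the example):
+
+ ```
+ # Minimal sketch: run the DARE-TIES recipe above through the mergekit CLI.
+ # Assumes `pip install mergekit`; drop "--cuda" to merge on CPU.
+ import subprocess
+
+ subprocess.run(
+     ["mergekit-yaml", "silicon-maid.yml", "./Silicon-Maid-7B", "--cuda"],
+     check=True,
+ )
+ ```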
+
+ ### Prompt Template (Alpaca)
+
+ I found the best SillyTavern results using the Noromaid template, but please try other templates! Let me know if you find anything good.
+
+ SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
+
+ Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak this by raising the temperature or lowering Min P to boost creativity, or by raising Min P to increase stability. You shouldn't need to touch anything else!
+
+ ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {prompt}
+
+ ### Response:
+ ```
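+
+ To sanity-check the template and sampler outside SillyTavern, here is a minimal sketch using the transformers library (`min_p` needs transformers >= 4.39); the sampling values echo the preset's intent but are illustrative, not its exact contents:
+
+ ```
+ # Minimal sketch: Alpaca-format prompting with illustrative Min P sampling.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "SanjiWatsuki/Silicon-Maid-7B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ prompt = (
+     "Below is an instruction that describes a task. "
+     "Write a response that appropriately completes the request.\n\n"
+     "### Instruction:\nGreet me in character as a robot maid.\n\n"
+     "### Response:\n"
+ )
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(
+     **inputs,
+     do_sample=True,
+     temperature=1.0,   # raise to boost creativity
+     min_p=0.1,         # lower for creativity, raise for stability
+     max_new_tokens=256,
+ )
+ print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```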