Update README.md
README.md CHANGED
@@ -1,3 +1,18 @@
+```yaml
+models:
+  - model: KatyTheCutie/LemonadeRP-4.5.3
+    parameters:
+      weight: 1.0
+  - model: dreamgen/WizardLM-2-7B
+    parameters:
+      weight: 0.2
+  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
+    parameters:
+      weight: 0.6
+merge_method: linear
+dtype: float16
+```
+
 # Lemon Kunoichi Wizard - 7b
 
 ![LemonKunoichiWizard](https://files.catbox.moe/eivabp.png)
@@ -15,6 +30,7 @@ Alpaca preset seems to work well with your own System Prompt.
 The model loads at 8192 on my end, but theoretically it should be able to go up to 32k. Not that it'll be coherent at 32k. Most models based on Mistral like this end up being - at best - 12k context size for coherent output. I only tested at 8k, which is where the base models tend to shine. YMMV otherwise.
 
 ---
+
 base_model:
 - SanjiWatsuki/Kunoichi-DPO-v2-7B
 - dreamgen/WizardLM-2-7B
@@ -45,17 +61,3 @@ The following models were included in the merge:
 
 The following YAML configuration was used to produce this model:
 
-```yaml
-models:
-  - model: KatyTheCutie/LemonadeRP-4.5.3
-    parameters:
-      weight: 1.0
-  - model: dreamgen/WizardLM-2-7B
-    parameters:
-      weight: 0.2
-  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
-    parameters:
-      weight: 0.6
-merge_method: linear
-dtype: float16
-```
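A note on what the merge weights in the YAML above work out to: mergekit's `linear` merge method normalizes the per-model weights so they sum to 1 (assuming its default normalization behavior), so the raw weights 1.0 / 0.2 / 0.6 translate into effective contributions like this. This is a small illustrative sketch, not taken from the model card:

```python
# Effective per-model contributions for the linear merge in the config above,
# assuming weights are normalized to sum to 1 (mergekit's default for
# merge_method: linear, to the best of my knowledge).
weights = {
    "KatyTheCutie/LemonadeRP-4.5.3": 1.0,
    "dreamgen/WizardLM-2-7B": 0.2,
    "SanjiWatsuki/Kunoichi-DPO-v2-7B": 0.6,
}

total = sum(weights.values())  # 1.8
effective = {name: w / total for name, w in weights.items()}

for name, frac in effective.items():
    print(f"{name}: {frac:.1%}")
# KatyTheCutie/LemonadeRP-4.5.3: 55.6%
# dreamgen/WizardLM-2-7B: 11.1%
# SanjiWatsuki/Kunoichi-DPO-v2-7B: 33.3%
```

So LemonadeRP dominates the blend, with Kunoichi-DPO-v2 as the secondary ingredient and WizardLM-2 as a light seasoning, which matches the model's name and intent.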