---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Sao10K/L3-8B-Niitama-v1
- princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2
tags:
- mergekit
- merge
- roleplay
- sillytavern
- llama3
- not-for-all-audiences
license: cc-by-nc-4.0
language:
- en
---

## SillyTavern

## Text Completion presets

```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
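
If you drive the model through an API rather than SillyTavern's UI, the same samplers map onto most llama.cpp-based backends. A minimal sketch using llama-cpp-python (the GGUF path is a placeholder, and the smoothing sampler is applied by SillyTavern or the backend, not by this client):

```python
# Minimal sketch: applying the preset above via llama-cpp-python.
# Assumes a local GGUF quant of this model; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./model-Q6_K.gguf", n_ctx=8192)

out = llm.create_completion(
    "Write a short scene introduction.",
    max_tokens=256,
    temperature=0.9,     # temp
    top_k=30,
    top_p=0.75,
    min_p=0.2,
    repeat_penalty=1.1,  # rep_pen
    # smooth_factor / smooth_curve (quadratic sampling) have no direct
    # parameter here; set them in SillyTavern or a backend that supports them.
)
print(out["choices"][0]["text"])
```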

## Advanced Formatting

[Context & Instruct preset by Virt-io](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v2.0)

Instruct Mode: Enabled


# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged with the slerp merge method in two stages: Niitama-v1 and Stheno-v3.2 are first merged into the intermediate tannedbum/L3-Niitama-Stheno-8B, which is then merged with SimPO-v0.2, as shown in the configuration below.
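
For intuition, slerp interpolates between two models' weights along the arc between them rather than the straight line, which tends to preserve weight geometry better than plain averaging. A minimal numpy sketch of the idea, not mergekit's exact implementation; `t` plays the same role as in the config below:

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    v0, v1 = w0.ravel(), w1.ravel()
    # Angle between the two weight vectors.
    cos_omega = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * w0 + t * w1
    # Interpolate along the arc instead of the chord.
    s0 = np.sin((1 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return (s0 * v0 + s1 * v1).reshape(w0.shape)
```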

### Models Merged

The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Sao10K/L3-8B-Niitama-v1](https://huggingface.co/Sao10K/L3-8B-Niitama-v1)
* [princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2)

### Configuration

The following YAML configuration was used to produce this model, one document per stage:

```yaml
# Stage 1: slerp Niitama-v1 and Stheno-v3.2 into the
# intermediate tannedbum/L3-Niitama-Stheno-8B.
slices:
  - sources:
      - model: Sao10K/L3-8B-Niitama-v1
        layer_range: [0, 32]
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Niitama-v1
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.6, 0.2, 0.4]
    - filter: mlp
      value: [0.8, 0.6, 0.4, 0.8, 0.6]
    - value: 0.4
dtype: bfloat16

# Stage 2: slerp the intermediate with SimPO-v0.2.
slices:
  - sources:
      - model: tannedbum/L3-Niitama-Stheno-8B
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Niitama-Stheno-8B
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.6, 0.2, 0.4]
    - filter: mlp
      value: [0.8, 0.6, 0.4, 0.8, 0.6]
    - value: 0.4
dtype: bfloat16
```
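
As a quick start, a minimal sketch for loading the merged model with transformers; the repo id is a placeholder for this model's actual Hugging Face path, and bfloat16 matches the merge dtype above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tannedbum/<this-model>"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # same dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(
    inputs, max_new_tokens=64, do_sample=True,
    temperature=0.9, top_k=30, top_p=0.75,
)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```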

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum