Nitral committed on
Commit 5fd9f1c • 1 Parent(s): de0e4d7
Update README.md
README.md CHANGED
@@ -1,13 +1,25 @@
 ---
 base_model:
-- Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
 - SanjiWatsuki/Kunoichi-DPO-v2-7B
+- Epiculous/Fett-uccine-7B
 library_name: transformers
 tags:
 - mergekit
 - merge
-
+- alpaca
+- mistral
 ---
+Thanks to @Epiculous for the dope model, the help with LLM backends, and the support overall.
+
+I'd also like to thank @kalomaze for the dope sampler additions to SillyTavern (ST).
+
+@SanjiWatsuki, thank you very much for the help and the model!
+
+ST users can find the TextGenPreset in the folder labeled as such.
+
+![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg)
+
+Quants: https://huggingface.co/bartowski/Kunocchini-exl2
 # mergedmodel
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
@@ -20,8 +32,8 @@ This model was merged using the SLERP merge method.
 ### Models Merged
 
 The following models were included in the merge:
-* [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context)
 * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
+* [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context)
 
 ### Configuration
 
@@ -30,12 +42,12 @@ The following YAML configuration was used to produce this model:
 ```yaml
 slices:
   - sources:
-      - model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
-        layer_range: [0, 32]
       - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
         layer_range: [0, 32]
+      - model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
+        layer_range: [0, 32]
 merge_method: slerp
-base_model:
+base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
 parameters:
   t:
     - filter: self_attn