Update README.md
README.md
CHANGED
@@ -5,22 +5,25 @@ library_name: transformers
 tags:
 - mergekit
 - merge
-
+- not-for-all-audiences
 ---
-# 
+# Infinite-Laymons-9B
+
+This model is intended for fictional role-play and storytelling.
+The focus is on original responses and the elimination of refusals.
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
-
+First, two models were merged with SLERP; the final model was then merged using the passthrough merge method.
 
 ### Models Merged
 
 The following models were included in the merge:
 * [Nitral-AI/Infinitely-Laydiculous-7B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B)
-* 
+* ABX-AI/./MODELS/Infinite-Laymons-7B
 
 ### Configuration
 
@@ -36,5 +39,4 @@ slices:
   layer_range: [12, 32]
 merge_method: passthrough
 dtype: float16
-
-```
+```
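For readers unfamiliar with mergekit, the passthrough stage described in the card typically looks like the following sketch of a full config. Only the `layer_range: [12, 32]`, `merge_method: passthrough`, and `dtype: float16` lines appear in the visible diff; the `[0, 20]` range for the first slice, the slice order, and the second model's repo-style name are illustrative assumptions, not the card's actual values.

```yaml
# Hypothetical passthrough "frankenmerge" config: stack layer slices of two
# 7B models into one larger model. Only layer_range [12, 32], the merge
# method, and the dtype are taken from the card; the rest is assumed.
slices:
  - sources:
      - model: Nitral-AI/Infinitely-Laydiculous-7B
        layer_range: [0, 20]    # assumed range, not shown in the diff
  - sources:
      - model: ABX-AI/Infinite-Laymons-7B   # shown as a local path in the card
        layer_range: [12, 32]   # from the card's config
merge_method: passthrough
dtype: float16
```

In a config of this shape, the overlapping slice ranges (layers 12 through 19 would be included twice) are what grow the result past the 7B base size toward roughly 9B parameters.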