divinetaco committed 9beb3a1b7a70cc23b9124997e9b4f9b21a4e1a6e425d12a8e6792683656146cc

Files changed:
- .gitattributes +2 -0
- README.md +59 -0
- aranea-tenebris.png +3 -0
- imatrix.dat +3 -0
.gitattributes CHANGED
@@ -34,3 +34,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 aranea-tenebris-120b-v1.0-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
+aranea-tenebris.png filter=lfs diff=lfs merge=lfs -text
+imatrix.dat filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,62 @@
---
license: cc-by-nc-4.0
base_model:
- Netrve/Miqu-PlayMaid-70B-v0.1
- ShinojiResearch/Senku-70B
library_name: transformers
tags:
- not-for-all-audiences
- nsfw
- mergekit
- merge
---

# aranea-tenebris-120b-v1.0-gguf

**aka Netrve/Miqu-PlayMaid-70B-v0.1 + ShinojiResearch/Senku-70B**

Model merge for uncensored creative writing and RP.

![image/png](https://huggingface.co/divinetaco/aranea-tenebris-120b-v1.0-gguf/resolve/main/aranea-tenebris.png)

A [mergekit](https://github.com/arcee-ai/mergekit) frankenmerge based on [Netrve/Miqu-PlayMaid-70B-v0.1](https://huggingface.co/Netrve/Miqu-PlayMaid-70B-v0.1) with interleaved layers of [ShinojiResearch/Senku-70B](https://huggingface.co/ShinojiResearch/Senku-70B).
This was the top-performing model from a second series of merge experiments aimed at creating a highly coherent creative writing and RP model.
Tests consisted of a series of private DnD scenario benchmarks, with manual comparison of the most promising merges.

A number of different base models, interleave models and layer offsets were compared.
This model outperformed a number of other popular 70B+ models and merges in both creativity and coherency tests. It was (briefly) compared against Mixtral 8x22B running 2/3/4 experts.

- Usable context: ~32768
- Recommended prompt format: Alpaca (see the template below)
- Layers: 137
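
For reference, the common minimal form of the Alpaca template (many frontends also prepend the standard Alpaca preamble; `{prompt}` is a placeholder):

```
### Instruction:
{prompt}

### Response:
```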

### Quantization

An importance matrix for llama.cpp quantization is included: [imatrix.dat](./imatrix.dat)

A few quants will be uploaded when bandwidth permits.
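
As a rough sketch of how this file is typically consumed with llama.cpp (binary names vary by build; newer builds ship these tools as `llama-imatrix`, `llama-quantize` and `llama-cli`; the fp16 GGUF and calibration corpus names below are placeholders):

```bash
# Hypothetical workflow - assumes a local llama.cpp build and an fp16 GGUF
# conversion of the merged model. calibration.txt is a placeholder corpus.

# Compute an importance matrix (this repo ships a precomputed imatrix.dat):
./imatrix -m aranea-tenebris-120b-v1.0-f16.gguf -f calibration.txt -o imatrix.dat

# Quantize with the importance matrix to reduce quality loss at low bit rates:
./quantize --imatrix imatrix.dat \
  aranea-tenebris-120b-v1.0-f16.gguf \
  aranea-tenebris-120b-v1.0-q2_k.gguf Q2_K

# Run with the full usable context (~32768):
./main -m aranea-tenebris-120b-v1.0-q2_k.gguf -c 32768 -i
```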

### Testing

Two different writing styles were considered for each testing scenario:
- Completions for 3rd person narration. No character role was assumed.
- Completions for 1st and 2nd person turn-based (out-of-order) RP. A character role was assumed by the model, but narration of minor characters and events was encouraged.

Tests assumed a mature audience, but a range of scenarios was constructed.
Thematic inconsistency or bias in character behaviour was penalized heavily.

Models showing the following were penalized during manual comparison:
- Consistently short responses.
- Laziness, or readily giving up on solving a character problem.
- Excessive malleability, where characters could not hold opinions or beliefs.
- Passiveness, or an inability to drive the narrative.
- Persistent repetition. Bad merges tend to latch onto and reuse specific keywords.
- Ignoring or missing obvious scenario solutions.
- Impersonating other major characters out of turn during RP tests.
- Failure to follow a character's description. This criterion is fairly broad, and could include things like character skills, refusals, etc.
- Major inconsistencies in scenes or recall. Note: invention of thematically consistent detail was encouraged.

### Interesting observations from benchmarking

- A 10-layer interleave stride with a 20-layer interleave width consistently outperformed alternative combinations for coherency (sketched below).
- An 8-layer interleave stride with a 16-layer interleave width consistently outperformed alternative combinations for creativity, while remaining reasonably coherent.
- Regular stride intervals are not optimal. In particular, offsetting the first or last set of base model layers often improved metrics.
- Goliath-120B is still a good standard for coherency below 4096 context. A few miqu-1 merges are comparable, but testing found a small amount of coherency could be sacrificed for notable creativity improvements.
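
The exact slice layout for this merge is not reproduced in this README. As a purely illustrative sketch, a regular 10-layer-stride / 20-layer-width passthrough interleave of two 80-layer 70B models would look like the mergekit config below; a perfectly regular pattern yields 140 layers, whereas this model's 137 layers reflect the irregular end offsets mentioned above:

```yaml
# Illustrative only - not the published recipe for aranea-tenebris.
merge_method: passthrough
dtype: float16
slices:
  - sources:
      - model: Netrve/Miqu-PlayMaid-70B-v0.1   # base slice: layers 0-19
        layer_range: [0, 20]
  - sources:
      - model: ShinojiResearch/Senku-70B       # interleave: layers 10-29
        layer_range: [10, 30]
  - sources:
      - model: Netrve/Miqu-PlayMaid-70B-v0.1
        layer_range: [20, 40]
  - sources:
      - model: ShinojiResearch/Senku-70B
        layer_range: [30, 50]
  - sources:
      - model: Netrve/Miqu-PlayMaid-70B-v0.1
        layer_range: [40, 60]
  - sources:
      - model: ShinojiResearch/Senku-70B
        layer_range: [50, 70]
  - sources:
      - model: Netrve/Miqu-PlayMaid-70B-v0.1
        layer_range: [60, 80]
```

Each 20-layer slice starts 10 layers after the previous one, alternating donor models; a config of this shape is built with `mergekit-yaml config.yml ./output-model`.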

aranea-tenebris.png ADDED
(Git LFS tracked file)

imatrix.dat ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d01fad66e09e09ee5cff5bf080fdc60c96ac80659f62da9919bc30a8c22ff1e5
+size 42991202