---
base_model:
- unsloth/Mistral-Small-Instruct-2409
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- anthracite-org/magnum-v4-22b
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- spow12/ChatWaifu_v2.0_22B
- rAIfle/Acolyte-22B
- Envoid/Mistral-Small-NovusKyver
- InferenceIllusionist/SorcererLM-22B
- allura-org/MS-Meadowlark-22B
- crestf411/MS-sunfall-v0.7.0
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---
***
## Schisandra
Many thanks to the authors of the models used!
[RPMax v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) | [Pantheon-RP](https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small) | [UnslopSmall-v1](https://huggingface.co/TheDrummer/UnslopSmall-22B-v1) | [Magnum V4](https://huggingface.co/anthracite-org/magnum-v4-22b) | [ChatWaifu v2.0](https://huggingface.co/spow12/ChatWaifu_v2.0_22B) | [SorcererLM](https://huggingface.co/InferenceIllusionist/SorcererLM-22B) | [Acolyte](https://huggingface.co/rAIfle/Acolyte-22B) | [NovusKyver](https://huggingface.co/Envoid/Mistral-Small-NovusKyver) | [Meadowlark](https://huggingface.co/allura-org/MS-Meadowlark-22B) | [Sunfall](https://huggingface.co/crestf411/MS-sunfall-v0.7.0)
***
### Overview
Main uses: RP, storywriting
An intelligent model that is attentive to detail and has a low-slop writing style. This time, with a stable tokenizer.
Oh, and it now contains one more finetune! Not sure whether all of them actually contribute to the output, but it's nice to see the numbers grow.
***
### Quants
GGUF: [Static](https://huggingface.co/mradermacher/MS-Schisandra-22B-v0.2-GGUF) | [Imatrix](https://huggingface.co/mradermacher/MS-Schisandra-22B-v0.2-i1-GGUF)
exl2: [4.65bpw](https://huggingface.co/waldie/MS-Schisandra-22B-v0.2-4.65bpw-h6-exl2) [5.5bpw](https://huggingface.co/waldie/MS-Schisandra-22B-v0.2-5.5bpw-h6-exl2) [6.5bpw](https://huggingface.co/waldie/MS-Schisandra-22B-v0.2-6.5bpw-h6-exl2)
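If you want to pull a quant programmatically, a minimal sketch with `huggingface_hub` is below; the exact filename is an assumption, so check the GGUF repo's file list for the quant size you want.
```python
from huggingface_hub import hf_hub_download

# Hypothetical quant filename: check the repo's file list for the exact
# name and pick the quant size that fits your hardware.
path = hf_hub_download(
    repo_id="mradermacher/MS-Schisandra-22B-v0.2-GGUF",
    filename="MS-Schisandra-22B-v0.2.Q4_K_M.gguf",
)
print(path)  # local cache path, ready to hand to llama.cpp or similar
```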
***
### Settings
Prompt format: Mistral-V3 or [this](https://huggingface.co/ToastyPigeon/ST-Presets-Mistral-Small/resolve/main/ST-formatting-Mistral-Small-Oneshot.json)
Samplers: [These](https://qu.ax/BCRKS.json) or [These](https://qu.ax/OusTx.json)
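If you'd rather not wire up the template by hand, the base instruct model's tokenizer ships a chat template that produces the Mistral-V3 formatting; a quick way to inspect it:
```python
from transformers import AutoTokenizer

# Render a sample conversation to see the exact [INST] ... [/INST] layout
# the merge expects (Mistral-V3 has no dedicated system role).
tok = AutoTokenizer.from_pretrained("unsloth/Mistral-Small-Instruct-2409")
messages = [{"role": "user", "content": "Write a short scene in a tavern."}]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```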
***
## Merge Details
### Merging steps
#### Step1
(Config partially taken from [here](https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B))
```yaml
base_model: spow12/ChatWaifu_v2.0_22B
parameters:
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
merge_method: della
models:
  - model: Envoid/Mistral-Small-NovusKyver
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
  - model: rAIfle/Acolyte-22B
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [0.01847, -0.01468, 0.01503, -0.01822, 0.01459]
```
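To reproduce a step, save the YAML and run it through mergekit, either with the `mergekit-yaml` CLI or from Python. A minimal sketch, assuming the config above is saved as `step1.yaml` (filename and output directory are illustrative; the output name matches the `Step1` reference used later):
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the Step1 config (saved locally as step1.yaml, name assumed).
with open("step1.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged weights to ./Step1 so later configs can point at it.
run_merge(config, out_path="Step1", options=MergeOptions(cuda=False, copy_tokenizer=True))
```
The remaining steps run the same way; only the config file and output path change.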
#### Step2
(Config partially taken from [here](https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B))
```yaml
base_model: InferenceIllusionist/SorcererLM-22B
parameters:
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
merge_method: della
models:
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
  - model: anthracite-org/magnum-v4-22b
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
      lambda: 0.85
      weight: [0.01847, -0.01468, 0.01503, -0.01822, 0.01459]
```
#### SchisandraVA2
(Config taken from [here](https://huggingface.co/HiroseKoichi/Llama-3-8B-Stroganoff-4.0))
```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  normalize: true
  int8_mask: true
tokenizer_source: base
base_model: TheDrummer/UnslopSmall-22B-v1
models:
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      density: 0.55
      weight: 1
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      density: 0.55
      weight: 1
  - model: Step1 # local output of the Step1 merge above
    parameters:
      density: 0.55
      weight: 1
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      density: 0.55
      weight: 1
  - model: Step2 # local output of the Step2 merge above
    parameters:
      density: 0.55
      weight: 1
```
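For intuition: DELLA-style methods operate on task vectors (finetune weights minus base weights), dropping a fraction of each delta and rescaling the survivors so the expected contribution is preserved, which is what the `density` values above control. A toy sketch of the drop-and-rescale idea (the real implementation assigns drop probabilities by parameter magnitude, while this sketch drops uniformly at random):
```python
import torch

def drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Toy version of the sparsify-and-rescale step behind DELLA merging.

    Keeps roughly `density` of the delta's entries and rescales them so
    the expected sum is unchanged. mergekit's della ranks deltas by
    magnitude; here we drop uniformly at random for simplicity.
    """
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

# Example: combine two finetunes' deltas onto a base tensor at density 0.55.
base = torch.zeros(4, 4)
deltas = [torch.randn(4, 4), torch.randn(4, 4)]
merged = base + sum(drop_and_rescale(d, 0.55) for d in deltas)
```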
#### Schisandra-v0.2
```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
parameters:
  density: 0.5
base_model: SchisandraVA2 # local output of the SchisandraVA2 merge above
models:
  - model: unsloth/Mistral-Small-Instruct-2409
    parameters:
      # Each 11-element list is a gradient over the layer stack; mergekit
      # interpolates the per-layer weight between these anchor points.
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - value: 0
  - model: SchisandraVA2
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - value: 1
```
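Once the final merge is produced (or fetched from the Hub), it loads like any other Mistral-Small checkpoint. A minimal sketch; the repo id here is assumed from the quant names above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the quant links; swap in your local merge output
# directory if you reproduced the steps yourself.
repo = "Nohobby/MS-Schisandra-22B-v0.2"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)
```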