---
license: cc-by-nc-4.0
tags:
- merge
- conversational
- multi-task
pipeline_tag: text-generation
---
# Winter Garden 7B - β
It has been said that we are in an open AI dark winter, so I thought I would make myself a nice winter garden.
## An experiment
I've done four-partition merges successfully in the past, so let's go for nine! I started with:
* Mistral-7B-v0.1
and merged in
* ZySec-7B-v1
* LemonadeRP-4.5.3
* dpo-binarized-NeutrixOmnibe-7B
* Multi-Verse-RP-7B
* AlphaMonarch-7B
* opus-v1.2-7b
* Kunoichi-DPO-v2-7B
* Noromaid-7B-0.4-DPO
* ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
### 9-partition merge
All of the layers were partitioned into 9 random bins. Alternating models were slerped with [0...1] and [1...0] gradients, except for attention, which was slerped at a constant 0.03. A sketch of the schedule follows below.
This means that the model is still predominantly ordered around base Mistral, including half of the input and output layers and 28% of attention.
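For the curious, here is a minimal sketch of what such a slerp schedule can look like. This is not the actual merge script; the function names, bin assignment, and demo tensors are illustrative assumptions.
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation: t=0 returns v0 (base), t=1 returns v1 (donor)."""
    u0 = v0 / (v0.norm() + eps)
    u1 = v1 / (v1.norm() + eps)
    dot = torch.clamp((u0 * u1).sum(), -1.0, 1.0)
    theta = torch.arccos(dot)
    if theta < eps:  # nearly parallel tensors: fall back to plain lerp
        return (1 - t) * v0 + t * v1
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)

def schedule_t(layer_idx: int, n_layers: int, bin_idx: int, is_attention: bool) -> float:
    """Interpolation weight for one tensor: attention stays pinned near the base
    at 0.03; everything else ramps [0...1] on even bins and [1...0] on odd bins."""
    if is_attention:
        return 0.03
    frac = layer_idx / max(n_layers - 1, 1)
    return frac if bin_idx % 2 == 0 else 1.0 - frac

# Demo on random tensors standing in for one layer's weights.
base, donor = torch.randn(8, 8), torch.randn(8, 8)
t = schedule_t(layer_idx=5, n_layers=32, bin_idx=3, is_attention=False)
merged = slerp(t, base, donor)
print(t, merged.shape)
```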
### Other
Includes fast tokenizer.
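If it helps, the fast tokenizer should load through the standard transformers call; the repo id below is an assumption, so substitute the model's actual path.
```python
from transformers import AutoTokenizer

# Hypothetical repo id; replace with this model's actual path.
tokenizer = AutoTokenizer.from_pretrained("maldv/winter-garden-7b-beta", use_fast=True)
print(tokenizer.is_fast)  # True when the Rust-backed fast tokenizer is loaded
```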
## Chat Template
I included a conversational chat template that takes "name", "to" (optional), and "content" as the turn fields. It is designed to follow the transcript-style chat used by some of these models. This type of use case works best when you outline a scene and create a character card.
```
### {{ title }}
{{ metadata }}
USER: Hello
ASSISTANT: Hi, how are you?
```
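Here is a minimal sketch of driving that template from transformers, assuming the repo id below and that `apply_chat_template` consumes the turn fields as described above; treat both as unverified assumptions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; point this at the model's actual path.
model_id = "maldv/winter-garden-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Transcript-style turns: "name" is the speaker, "to" is an optional
# addressee, and "content" carries the text of the turn.
messages = [
    {"name": "USER", "content": "Hello"},
    {"name": "ASSISTANT", "to": "USER", "content": "Hi, how are you?"},
    {"name": "USER", "content": "Tell me about your garden."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```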
Initial tests show this model is a real talker. If you prompt it with `[WP] <prompt>\n\n`, it will take right off.
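As a sketch of that usage (repo id assumed, sampling settings illustrative):
```python
from transformers import pipeline

# Hypothetical repo id; replace with this model's actual path.
generator = pipeline("text-generation", model="maldv/winter-garden-7b-beta")

# A writing-prompt style request; the model continues after the blank line.
story = generator(
    "[WP] A gardener discovers a rose that blooms only at midnight.\n\n",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
)
print(story[0]["generated_text"])
```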
## Scores
Metric | Score
--- | ---