---
base_model:
- cognitivecomputations/dolphin-2.2.1-mistral-7b
- l3utterfly/mistral-7b-v0.1-layla-v4-chatml
- teknium/Mistral-Trismegistus-7B
- Aryanne/Open-StarLake-Swap-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# MixSwap

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit), though my own branch of mergekit was used here.
## Merge Details

### Merge Method

This model was merged using the task_swapping merge method, with [Aryanne/Open-StarLake-Swap-7B](https://huggingface.co/Aryanne/Open-StarLake-Swap-7B) as the base.
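The task_swapping implementation itself lives in my mergekit branch and is not documented on this card. Purely as a conceptual illustration of what the `random_mask` and `random_mask_seed` parameters in the configuration below suggest (an assumption, not the actual method), a seeded masked swap between a base tensor and a donor tensor might look like this:

```python
# Conceptual sketch ONLY -- not the actual task_swapping code, which lives
# in the author's mergekit branch. It illustrates one plausible reading of
# the config: a seeded random mask decides which elements of the base
# tensor get replaced by the donor model's elements.
import torch

def masked_swap(base: torch.Tensor, donor: torch.Tensor,
                fraction: float, seed: int) -> torch.Tensor:
    gen = torch.Generator().manual_seed(seed)
    mask = torch.rand(base.shape, generator=gen) < fraction
    return torch.where(mask, donor, base)

# Example: swap roughly 10% of elements, mirroring random_mask: 0.1 below.
base = torch.zeros(4, 4)
donor = torch.ones(4, 4)
print(masked_swap(base, donor, fraction=0.1, seed=18019))
```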
### Models Merged

The following models were included in the merge:
- [cognitivecomputations/dolphin-2.2.1-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.2.1-mistral-7b)
- [teknium/Mistral-Trismegistus-7B](https://huggingface.co/teknium/Mistral-Trismegistus-7B)
- [l3utterfly/mistral-7b-v0.1-layla-v4-chatml](https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v4-chatml)
## Prompt Format

I prefer the setup below, which seems to work well.

Example using Koboldcpp:

Start Seq.:

```
\nYour_name:
```

End Seq.:

```
\nCharacter_name:
```
In the Memory field:

```
### Instruction:
Character description.
Generate an endless, verbose (very descriptive) role-play conversation with Character_name.
### Response:
Your_name: how are you doing babe? *Your_name approaches Character_name and kisses her on the lips*
Character_name: I'm fine, it's been a weird day. *Character_name blushes and hugs Your_name with love*
```
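Outside Koboldcpp, the same format can be used with plain transformers. Below is a minimal generation sketch; the repo id `Aryanne/MixSwap` is an assumption based on this card's title, so replace it with the actual model id.

```python
# Minimal generation sketch with transformers. The repo id below is an
# ASSUMPTION based on this card's title; substitute the real model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aryanne/MixSwap"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Memory block plus one user turn, following the prompt format above.
prompt = (
    "### Instruction:\n"
    "Character description.\n"
    "Generate an endless, verbose (very descriptive) role-play "
    "conversation with Character_name.\n"
    "### Response:"
    "\nYour_name: how are you doing babe?"
    "\nCharacter_name:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True,
                     temperature=0.8)
# Print only the newly generated tokens (the character's reply).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```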
## Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
  model:
    path: Aryanne/Open-StarLake-Swap-7B
dtype: bfloat16
merge_method: task_swapping
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: l3utterfly/mistral-7b-v0.1-layla-v4-chatml
    parameters:
      diagonal_offset: 4.0
      random_mask: 0.1
      random_mask_seed: 1956557.0
      weight: 0.4
  - layer_range: [0, 32]
    model:
      model:
        path: cognitivecomputations/dolphin-2.2.1-mistral-7b
    parameters:
      diagonal_offset: 4.0
      random_mask: 0.1
      random_mask_seed: 18019.0
      weight: 0.333
  - layer_range: [0, 32]
    model:
      model:
        path: teknium/Mistral-Trismegistus-7B
    parameters:
      diagonal_offset: 4.0
      random_mask: 0.05
      random_mask_seed: 666666.0
      weight: 0.5
  - layer_range: [0, 32]
    model:
      model:
        path: Aryanne/Open-StarLake-Swap-7B
```
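Assuming my mergekit branch is installed (the task_swapping method is not part of upstream mergekit), saving the YAML above as `config.yml` and running `mergekit-yaml config.yml ./output-model-directory` should reproduce this merge.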