This merge is experimental, but it seems fine to me so far. I haven't tested it deeply yet, but considering the two merged models it should be good for role-play 😈. Feel free to leave a suggestion or feedback.
This is a merge of pre-trained language models created using mergekit (my experimental `swapping` branch, here).
This model was merged using the task_swapping merge method, with NeuralNovel/Senzu-7B-v0.1-DPO as the base.

The following models were included in the merge:

- senseable/WestLake-7B-v2
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_swapping
base_model: NeuralNovel/Senzu-7B-v0.1-DPO
models:
  - model: senseable/WestLake-7B-v2
    parameters:
      weight: 0.75
      diagonal_offset: 2 # it doesn't do anything when you use random_mask
      random_mask: 0.3333
      random_mask_seed: 98557
dtype: bfloat16
```
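A merge like this is typically produced by pointing mergekit's `mergekit-yaml` CLI at the configuration above (e.g. `mergekit-yaml config.yml ./merged-model`), and the output directory can then be loaded like any other Transformers checkpoint. The sketch below is a minimal, hedged example of loading and prompting the merged weights with 🤗 Transformers; the local path, prompt, and generation settings are illustrative assumptions, not part of this card.

```python
# Minimal sketch: load the merged model with Hugging Face Transformers.
# The path "./merged-model" and the generation settings are assumptions;
# substitute the actual mergekit output directory or a Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"  # directory written by `mergekit-yaml config.yml ./merged-model`

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the merge config
    device_map="auto",
)

prompt = "Write a short in-character greeting from a tavern keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```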
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 67.28 |
| AI2 Reasoning Challenge (25-Shot) | 68.34 |
| HellaSwag (10-Shot) | 85.70 |
| MMLU (5-Shot) | 64.14 |
| TruthfulQA (0-shot) | 50.43 |
| Winogrande (5-shot) | 82.48 |
| GSM8k (5-shot) | 52.62 |
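As a quick sanity check (not part of the original card), the Avg. row appears to be the unweighted mean of the six benchmark scores, which a short snippet can confirm:

```python
# Assumption: Avg. is the unweighted mean of the six benchmark scores above.
scores = {
    "ARC (25-shot)": 68.34,
    "HellaSwag (10-shot)": 85.70,
    "MMLU (5-shot)": 64.14,
    "TruthfulQA (0-shot)": 50.43,
    "Winogrande (5-shot)": 82.48,
    "GSM8k (5-shot)": 52.62,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.3f}")  # 67.285, consistent with the reported Avg. of 67.28
```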