---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
- ND911/EE-LMaid-7B-Slerp
---

![](maid.jpeg)

Experimental RP merge, tested in SillyTavern with Min-P sampling.

One of the parents, EE-LMaid-7B-Slerp, is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)
* [ND911/EE-Maid-7B-Slerp](https://huggingface.co/ND911/EE-Maid-7B-Slerp), itself a merge of SanjiWatsuki/Silicon-Maid-7B and maywell/Synatra-7B-v0.3-RP

# Franken-Maid-Slerp

Franken-Maid-Slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE)
* [ND911/EE-LMaid-7B-Slerp](https://huggingface.co/ND911/EE-LMaid-7B-Slerp)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
        layer_range: [0, 32]
      - model: ND911/EE-LMaid-7B-Slerp
        layer_range: [0, 32]
merge_method: slerp
base_model: ND911/EE-LMaid-7B-Slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
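
## 💻 Usage

A minimal sketch for loading the merge with 🤗 Transformers. The repo id `ND911/Franken-Maid-Slerp` is assumed from this card's title, and the `min_p` value is illustrative (Min-P sampling in `generate` requires a recent transformers release); adjust both to taste.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ND911/Franken-Maid-Slerp"  # assumed repo id, taken from the card title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config above
    device_map="auto",
)

prompt = "Write a short in-character greeting for a roleplay scene."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    min_p=0.05,  # illustrative Min-P value, mirroring the SillyTavern setup
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```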