---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
- maywell/PiVoT-0.1-Starling-LM-RP
- senseable/WestLake-7B-v2
- CalderaAI/Naberius-7B
- cgato/Thespis-Mistral-7b-v0.7
- NeverSleep/Noromaid-7B-0.4-DPO
- SanjiWatsuki/Silicon-Maid-7B
- lemonilia/AshhLimaRP-Mistral-7B
- NurtureAI/neural-chat-7b-v3-16k
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- text-generation-inference
---
# SultrySilicon-7B-V2
- Original Mergekit config: [KatyTestHistorical/SultrySilicon-7B-V2](https://huggingface.co/KatyTestHistorical/SultrySilicon-7B-V2-GGUF/blob/main/SultrySilicon-7B-V2.yaml)
- Model Author: [KatyTheCutie](https://huggingface.co/KatyTheCutie)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/653a2392341143f7774424d8/E-r1weHcxpXdkZ20eImdn.png)
## Model Summary
An experimental 7B roleplay-focused merge - feedback is appreciated!
(V2 is a bit more sultry and lewd than V1.)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model.
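In task arithmetic, each fine-tuned model contributes a "task vector" (its parameter delta from the base model), and the weighted task vectors are summed onto the base weights. A minimal per-tensor sketch of the idea follows; it is illustrative only, not mergekit's actual implementation, and the function name and state-dict handling are assumptions:

```python
def task_arithmetic_merge(base_sd, tuned_sds, weights):
    """Illustrative task arithmetic: merged = base + sum_i w_i * (tuned_i - base).

    base_sd:   state dict of the base model (here, Mistral-7B-v0.1)
    tuned_sds: list of state dicts for the fine-tuned models being merged
    weights:   list of floats matching the `weight:` values in the YAML below
    """
    merged = {}
    for name, base_tensor in base_sd.items():
        # Each model's task vector is its delta from the base, scaled by its weight.
        delta = sum(w * (sd[name] - base_tensor) for sd, w in zip(tuned_sds, weights))
        merged[name] = base_tensor + delta
    return merged
```

Note that the weights need not sum to 1; each one simply scales how strongly that model's task vector pulls the merge away from the base.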
### Models Merged
The following models were included in the merge:
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
* [CalderaAI/Naberius-7B](https://huggingface.co/CalderaAI/Naberius-7B)
* [cgato/Thespis-Mistral-7b-v0.7](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.7)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [lemonilia/AshhLimaRP-Mistral-7B](https://huggingface.co/lemonilia/AshhLimaRP-Mistral-7B)
* [NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      weight: 0.37
  - model: lemonilia/AshhLimaRP-Mistral-7B
    parameters:
      weight: 0.29
  - model: NurtureAI/neural-chat-7b-v3-16k
    parameters:
      weight: 0.23
  - model: cgato/Thespis-Mistral-7b-v0.7
    parameters:
      weight: 0.23
  - model: CalderaAI/Naberius-7B
    parameters:
      weight: 0.15
  - model: SanjiWatsuki/Silicon-Maid-7B
    parameters:
      weight: 0.25
  - model: senseable/WestLake-7B-v2
    parameters:
      weight: 0.27
  - model: maywell/PiVoT-0.1-Starling-LM-RP
    parameters:
      weight: 0.27
dtype: float16
```
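### Usage
For quick testing, the merged model can be loaded with 🤗 Transformers. This is a minimal sketch: the repo id below is an assumption (substitute the repository this card is hosted under), and the Alpaca-style prompt is a placeholder, since the merged source models use varying roleplay prompt formats.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "KatyTestHistorical/SultrySilicon-7B-V2"  # assumed repo id; adjust as needed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

# Placeholder Alpaca-style prompt; adapt to your frontend's roleplay template.
prompt = "### Instruction:\nWrite a short in-character greeting.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```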