---
base_model:
- sophosympatheia/Midnight-Miqu-70B-v1.5
- NeverSleep/MiquMaid-v3-70B
- maywell/miqu-evil-dpo
- 152334H/miqu-1-70b-sf
library_name: transformers
tags:
- mergekit
- merge
license: unknown
---
# MiquSuperdark-70B-v1
#### This model is outperformed by [MiquSuperdark-70B-v2](https://huggingface.co/ddh0/MiquSuperdark-70B-v2); prefer that model over this one in all cases.
**MiquSuperdark-70B-v1** is a merge of three of the most popular Miqu-derived models, along with Miqu itself. The goal of the merge is to create a strong, well-rounded chat model that picks up desirable traits from its constituent models without sacrificing intelligence.
This is a DARE Linear merge with the following composition:
- [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5) at weight 0.4
- [NeverSleep/MiquMaid-v3-70B](https://huggingface.co/NeverSleep/MiquMaid-v3-70B) at weight 0.2
- [maywell/miqu-evil-dpo](https://huggingface.co/maywell/miqu-evil-dpo) at weight 0.2
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) at weight 0.2 (used as base model)
DARE Linear was chosen as the merge method based on [this HF discussion](https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/2), in which the creator of Midnight-Miqu says "*in my own testing I consistently got the best results from using a dare_linear merge when working with miqu models*".
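For intuition, here is a toy sketch of what a DARE linear merge does. This is not mergekit's actual implementation, and the `density` value is purely illustrative: each fine-tuned model's delta from the base is randomly pruned, the surviving entries are rescaled to preserve the expected delta, and the pruned deltas are combined as a weighted sum on top of the base.

```python
import numpy as np

def dare_linear_merge(base, finetunes, weights, density=0.5, seed=0):
    """Toy DARE linear merge over flat parameter vectors.

    DARE: Drop each delta parameter with probability (1 - density),
    And REscale the survivors by 1/density so the expected delta is
    unchanged. The pruned deltas are then linearly combined by weight.
    """
    rng = np.random.default_rng(seed)
    merged = base.astype(np.float64)
    for params, w in zip(finetunes, weights):
        delta = params - base                     # task vector vs. base
        keep = rng.random(delta.shape) < density  # random drop mask
        merged += w * (delta * keep) / density    # rescale + weighted sum
    return merged
```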
## Prompt format
The model responds well to general-purpose prompt formats such as Alpaca. Alternatively, I suggest trying the following format, replacing the `{placeholder text}` spans with your actual messages (omit the curly brackets).
```
<message from="system">{your system prompt here}</message><message from="user">{user prompt here}</message><message from="bot">{bot response here}</message><message from="user">{user prompt here}</message><message from="bot">{bot response here}</message> [... and so on ...]
```
This format is readily understood by the model and leads to the expected high-quality responses. Note the lack of newlines (`\n`): they are not necessary and might actually make it harder for the model to follow along.
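If you want to build this format programmatically, a minimal helper might look like the following. The function name and the `(role, text)` turn structure are just illustrative:

```python
def build_prompt(system, turns):
    """Render a conversation into the <message from="..."> format.

    `turns` is a list of (role, text) pairs, where role is "user" or "bot".
    No newlines are inserted between messages, matching the format above.
    """
    prompt = f'<message from="system">{system}</message>'
    for role, text in turns:
        prompt += f'<message from="{role}">{text}</message>'
    # Leave the final bot message open so the model completes it
    prompt += '<message from="bot">'
    return prompt

# Example usage:
print(build_prompt("You are a helpful assistant.", [("user", "Hello!")]))
```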
## Merge Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_linear
base_model: /home/dylan/Documents/AI/merge/miqu-1-70b-sf
models:
  - model: /media/dylan/SanDisk/LLMs/Midnight-Miqu-70B-v1.5
    parameters:
      weight: 0.4
  - model: /home/dylan/Documents/AI/merge/miqu-1-70b-sf
    parameters:
      weight: 0.2
  - model: /media/dylan/SanDisk/LLMs/miqu-evil-dpo/
    parameters:
      weight: 0.2
  - model: /home/dylan/Documents/AI/merge/MiquMaid-v3-70B
    parameters:
      weight: 0.2
dtype: float16
tokenizer_source: model:/home/dylan/Documents/AI/merge/miqu-1-70b-sf
```
The tokenizer is copied from the base model [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf).
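To run the model with 🤗 Transformers, something like the following should work. This is standard `AutoModelForCausalLM` loading, assuming the repo id `ddh0/MiquSuperdark-70B-v1`; adjust `device_map` and dtype to your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ddh0/MiquSuperdark-70B-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the merge dtype
    device_map="auto",          # requires the accelerate package
)

# Prompt in the format suggested above
inputs = tokenizer(
    '<message from="user">Hello!</message><message from="bot">',
    return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```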