---
license: cc-by-nc-nd-4.0
tags:
- not-for-all-audiences
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged with the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, using [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) as the base model.
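
Conceptually, `dare_ties` operates on each model's task vector (its parameter delta from the base): the DARE step randomly drops delta entries and rescales the survivors (the `density` value in the configuration below is the keep rate), and the TIES step then elects a per-parameter sign and discards contributions that disagree before the weighted deltas are added back to the base. The snippet below is a minimal per-tensor sketch of that idea in PyTorch; it is illustrative only, not mergekit's actual implementation (weight normalization and the `epsilon`/`lambda` settings, for example, are handled differently there), and the final `lam` scale only loosely mirrors the config's `lambda`.

```python
import torch

def dare_ties_merge(base, finetuned, weights, densities, lam=1.0):
    """Illustrative per-tensor DARE-TIES merge (a sketch, not mergekit's code).

    base:      base-model tensor
    finetuned: list of tensors from the fine-tuned models
    weights:   per-model merge weights
    densities: per-model keep rates for the DARE drop step
    lam:       final scale applied to the merged delta
    """
    deltas = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base                                  # task vector
        keep = torch.bernoulli(torch.full_like(delta, d))  # DARE: keep each entry with prob. = density
        deltas.append(w * delta * keep / d)                # rescale survivors by 1 / density

    stacked = torch.stack(deltas)
    elected = torch.sign(stacked.sum(dim=0))               # TIES: elect a per-element sign
    agree = torch.sign(stacked) == elected                 # keep only agreeing contributions
    merged_delta = (stacked * agree).sum(dim=0)

    return base + lam * merged_delta
```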

### Models Merged

The following models were included in the merge:

* [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated)
* [huihui-ai/Qwen2.5-32B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-32B-Instruct-abliterated)
* [ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3](https://huggingface.co/ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3)
* [gctian/qwen2.5-32B-roleplay-zh](https://huggingface.co/gctian/qwen2.5-32B-roleplay-zh)
* [jpacifico/Chocolatine-32B-Instruct-DPO-v1.2](https://huggingface.co/jpacifico/Chocolatine-32B-Instruct-DPO-v1.2)
* [nbeerbower/Qwen2.5-Gutenberg-Doppel-32B](https://huggingface.co/nbeerbower/Qwen2.5-Gutenberg-Doppel-32B)
* [EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2)
* [crestf411/Q2.5-32B-Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush)
* [AiCloser/Qwen2.5-32B-AGI](https://huggingface.co/AiCloser/Qwen2.5-32B-AGI)
* [AXCXEPT/EZO-Qwen2.5-32B-Instruct](https://huggingface.co/AXCXEPT/EZO-Qwen2.5-32B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      weight: 1.0
      density: 0.85
  - model: nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
    parameters:
      weight: 0.30
      density: 0.80
  - model: crestf411/Q2.5-32B-Slush
    parameters:
      weight: 0.25
      density: 0.80
  - model: huihui-ai/Qwen2.5-32B-Instruct-abliterated
    parameters:
      weight: 0.23
      density: 0.78
  - model: gctian/qwen2.5-32B-roleplay-zh
    parameters:
      weight: 0.22
      density: 0.75
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
    parameters:
      weight: 0.20
      density: 0.73
  - model: jpacifico/Chocolatine-32B-Instruct-DPO-v1.2
    parameters:
      weight: 0.19
      density: 0.72
  - model: AXCXEPT/EZO-Qwen2.5-32B-Instruct
    parameters:
      weight: 0.18
      density: 0.72
  - model: AiCloser/Qwen2.5-32B-AGI
    parameters:
      weight: 0.15
      density: 0.68
  - model: huihui-ai/QwQ-32B-Preview-abliterated
    parameters:
      weight: 0.14
      density: 0.68

merge_method: dare_ties
base_model: Qwen/QwQ-32B-Preview
parameters:
  density: 0.84
  epsilon: 0.07
  lambda: 1.24
dtype: bfloat16
tokenizer_source: union
```
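
To reproduce the merge, the configuration above can be saved to a YAML file (for example `merge-config.yaml`, a filename chosen here purely for illustration) and passed to mergekit's `mergekit-yaml` command-line entry point. The merged output is a standard Qwen2.5-architecture causal LM, so it can be loaded with Hugging Face Transformers; the snippet below is a minimal sketch in which the model path is a placeholder for this repository or a local merge output directory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: point this at the local mergekit output directory
# or at this model's Hugging Face repo id.
model_path = "./merged-qwq-32b"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge's `dtype: bfloat16`
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```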