---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- chat
- conversational
base_model:
- Qwen/Qwen2.5-32B
- AiCloser/Qwen2.5-32B-AGI
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- fblgit/TheBeagle-v2beta-32B-MGS
- huihui-ai/Qwen2.5-32B-Instruct-abliterated
- huihui-ai/QwQ-32B-Preview-abliterated
- Qwen/QwQ-32B-Preview
- rombodawg/Rombos-LLM-V2.5-Qwen-32b
- nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/sF7RDZA7lFYOmGy4bGy1s.png)
[imat quants](https://huggingface.co/mradermacher/Qwentile2.5-32B-Instruct-i1-GGUF)
# Qwentile 2.5 32B Instruct
Qwentile 2.5 32B Instruct is a *normalized denoised Fourier interpolation* of the following models:
```yaml
output_base_model: "Qwen/Qwen2.5-32B"
finetune_merge:
- { "model": "AiCloser/Qwen2.5-32B-AGI", "base": "Qwen/Qwen2.5-32B", "alpha": 0.3 }
- { "model": "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7 }
- { "model": "fblgit/TheBeagle-v2beta-32B-MGS", "base": "Qwen/Qwen2.5-32B", "alpha": 0.6 }
- { "model": "huihui-ai/Qwen2.5-32B-Instruct-abliterated", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 1.0 }
- { "model": "huihui-ai/QwQ-32B-Preview-abliterated", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0 }
- { "model": "Qwen/QwQ-32B-Preview", "base": "Qwen/Qwen2.5-32B", "alpha": 0.8, "is_input": true }
- { "model": "rombodawg/Rombos-LLM-V2.5-Qwen-32b", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0, "is_output": true }
- { "model": "nbeerbower/Qwen2.5-Gutenberg-Doppel-32B", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.4 }
```
In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model.
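Roughly, the per-tensor operation looks like the sketch below. This is illustrative only, not the actual merge code: the function name, the `keep_frac` parameter, and the exact thresholding, normalization, and blending details are simplified stand-ins, and the `is_input`/`is_output` handling from the config above is omitted.
```python
import torch

def fourier_merge_tensor(base_weight, finetune_weights, alphas, keep_frac=0.9):
    """Illustrative sketch: merge per-tensor deltas in frequency space.

    For each finetune, take the delta against the base, move it into signal
    space with an FFT, drop the weakest coefficients (denoise), rescale so the
    surviving energy matches the original (normalize), blend by alpha, and add
    the inverse transform back onto the base weight.
    """
    merged_delta = torch.zeros_like(base_weight, dtype=torch.complex64)
    total_alpha = sum(alphas)
    for w, alpha in zip(finetune_weights, alphas):
        delta = (w - base_weight).to(torch.float32)
        spec = torch.fft.fft2(delta) if delta.dim() == 2 else torch.fft.fft(delta)

        # Denoise: zero out the smallest-magnitude frequency coefficients.
        mags = spec.abs().flatten()
        k = int((1.0 - keep_frac) * mags.numel())
        threshold = torch.kthvalue(mags, k).values if k > 0 else mags.new_tensor(0.0)
        denoised = spec * (spec.abs() >= threshold)

        # Normalize: preserve the delta's overall energy after masking.
        norm_before = torch.linalg.vector_norm(spec)
        norm_after = torch.linalg.vector_norm(denoised)
        if norm_after > 0:
            denoised = denoised * (norm_before / norm_after)

        merged_delta += (alpha / total_alpha) * denoised

    delta_out = torch.fft.ifft2(merged_delta) if base_weight.dim() == 2 else torch.fft.ifft(merged_delta)
    return base_weight + delta_out.real.to(base_weight.dtype)
```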
### What is this?
I started this experiment because QwQ is a really nifty model, but it was giving me problems with XML output - which is what I use for my thought tokens. So I thought... let's just merge it in!
The first merge worked pretty well, but I got a sense that the balances could be tweaked. Why not throw in some other models as well for fun, and see if I can't run out of disk space in the process?
### Initial Results
It's a little crispier than Awqward, but it does generate stable output. Since it is based on the Qwen2.5 base model instead of the instruct model, it did not fail the math test, and it scores alongside models twice its size:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/Yjln2MIh15loleJR7EpbL.png)
## How to think
This model is very responsive to steering, and has innate chain of thought, so producing nicely formatted chain-of-thought results is quite easy. Below is a very simple proof-of-concept example of how to achieve a thought turn.
#### System Prompt
```
You are my assistant. You should be helpful and use <thought></thought> xml.
All responses are in xml format.
```
#### One Shot
```
<thoughts>
<thought>Example thought</thought>
<idea>I have an idea</idea>
</thoughts>
```
#### Request
```
Think through how you render 2d polys at different scales.
```
#### Response
```
<response>
<thoughts>
<thought>When rendering 2D polygons at different scales, it's important to maintain their proportions and visual quality.</thought>
<idea>One approach could be using texture atlases for efficient scaling without losing detail.</idea>
<thought>Another idea is implementing level-of-detail (LOD) techniques to adjust polygon complexity based on scale.</thought>
<idea>We might also consider using shaders to handle dynamic scaling and apply effects smoothly.</idea>
</thoughts>
</response>
```
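To reproduce a turn like this programmatically, a minimal transformers sketch looks something like the following. The sampling settings are just placeholders, and feeding the one-shot as a prior assistant turn is one reasonable way to wire it up, not the only one.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/Qwentile2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are my assistant. You should be helpful and use <thought></thought> xml.\nAll responses are in xml format."},
    {"role": "assistant", "content": "<thoughts>\n<thought>Example thought</thought>\n<idea>I have an idea</idea>\n</thoughts>"},
    {"role": "user", "content": "Think through how you render 2d polys at different scales."},
]

# Build the prompt with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```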
I did notice it sometimes drops trailing tags, so you should always validate (and if you are clever, repair) any structured responses.
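A naive validate-and-repair pass could look like this (sketch only; a production pipeline would want something sturdier than regex tag matching):
```python
import re
import xml.etree.ElementTree as ET

def repair_xml(text: str) -> str:
    """Naive repair: append closing tags for any tags left open."""
    stack = []
    for match in re.finditer(r"<(/?)([A-Za-z_][\w.-]*)[^>]*?(/?)>", text):
        closing, name, self_closing = match.groups()
        if self_closing:
            continue
        if closing:
            if stack and stack[-1] == name:
                stack.pop()
        else:
            stack.append(name)
    # Close whatever is still open, innermost first.
    return text + "".join(f"</{name}>" for name in reversed(stack))

def parse_or_repair(text: str):
    try:
        return ET.fromstring(text)
    except ET.ParseError:
        return ET.fromstring(repair_xml(text))
```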
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwentile2.5-32b-instruct,
    title = {Qwentile 2.5 32B Instruct},
    url = {https://huggingface.co/maldv/Qwentile2.5-32B-Instruct},
    author = {Praxis Maldevide},
    month = {December},
    year = {2024}
}
```