⚠️ Experimental Model - Pre-Alpha Warning
Please note that ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion is currently in Pre-Alpha and under active revision. Some features and functionalities may not perform as expected while the model is in this experimental phase. We are continuously refining the architecture, and future updates will improve performance and stability.
Known Issues:
- The quantized versions of this model may produce random tokens and exhibit unstable behavior.
- Further revisions are in progress to ensure better grammatical coherence and sentence generation.
ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion
ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion is a cutting-edge merged model that combines the finest features from instruction-following, coding, mathematical reasoning, and factual question-answering. This powerhouse is designed for high performance in diverse technical, creative, and interactive tasks.
🌳 Family Tree
This model is the fusion of the following:
- cyixiao/qwen-1.5B-openbookqa
- unsloth/Qwen2.5-Coder-1.5B-Instruct
- Qwen/Qwen2.5-Math-1.5B-Instruct
- bunnycore/Qwen2.5-1.5B-Matrix
- Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini
- Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3
These models have been seamlessly blended to create a versatile AI that excels across multiple domains.
🧬 Detailed Model Lineage
A: cyixiao/qwen-1.5B-openbookqa
- Focuses on factual knowledge and reasoning from the OpenBookQA dataset, providing strong question-answering capabilities.
B: unsloth/Qwen2.5-Coder-1.5B-Instruct
- Tailored for coding and instruction-following, this model enhances the ability to generate code and follow precise instructions with ease.
C: Qwen/Qwen2.5-Math-1.5B-Instruct
- This model specializes in mathematical reasoning and logical problem-solving, making it perfect for structured tasks that require high-level thinking.
D: bunnycore/Qwen2.5-1.5B-Matrix
- A multi-purpose model that blends instruction, math, and coding, providing a well-rounded performance in both structured and creative tasks.
E: Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini
- Fine-tuned on conversational and identity-specific tasks, this model strengthens the merge's ability to handle conversation-heavy tasks with clarity.
F: Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3
- This model brings uncensored capabilities, ensuring that the AI is flexible and adaptable in open-ended and unrestricted instruction-following scenarios.
🛠️ Merge Details
The model was merged using the DELLA merge method with bfloat16 precision, ensuring high performance across multiple task types. Here's the configuration used for the merge:
```yaml
merge_method: della
dtype: bfloat16
parameters:
  epsilon: 0.1
  lambda: 1.0
  normalize: true
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
models:
  - model: cyixiao/qwen-1.5B-openbookqa
    parameters:
      weight: 1
      density: 0.5
  - model: unsloth/Qwen2.5-Coder-1.5B-Instruct
    parameters:
      weight: 1
      density: 0.6
  - model: Qwen/Qwen2.5-Math-1.5B-Instruct
    parameters:
      weight: 1
      density: 0.55
  - model: bunnycore/Qwen2.5-1.5B-Matrix
    parameters:
      weight: 1
      density: 0.55
  - model: Syed-Hasan-8503/Qwen2.5-1.5B-Instruct-WO-Adam-mini
    parameters:
      weight: 1
      density: 0.45
  - model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3
    parameters:
      weight: 1
      density: 0.5
```
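A merge like this is typically run with the mergekit toolkit, which implements the DELLA method and provides the `mergekit-yaml` entry point. The file name and output path below are illustrative:

```shell
# Install mergekit, the toolkit that implements the DELLA merge method
pip install mergekit

# Save the YAML configuration above as della-fusion.yaml, then run the merge.
# The second argument is the output directory for the merged weights.
mergekit-yaml della-fusion.yaml ./Qwen2.5-1.5B-Fusion
```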
🎯 Key Features & Capabilities
1. Coding and Instruction Following:
This model excels in technical coding tasks thanks to the contributions from Qwen2.5-Coder and Matrix.
2. Mathematical Reasoning:
With Qwen2.5-Math-1.5B-Instruct, the model is perfect for solving complex mathematical problems and structured logical tasks.
3. Conversational Abilities:
Thanks to the Syed-Hasan-8503 fine-tune on conversation and identity tasks, the model handles complex dialogue and conversational exchanges with clarity.
4. Uncensored Versatility:
Thanks to Josiefied-Qwen2.5, this model can operate without restrictions, making it ideal for open-ended instruction-following.
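The merged model can be loaded with the standard Hugging Face `transformers` API. A minimal usage sketch (the prompt and generation settings are illustrative, and downloading the weights requires network access):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroXClem/Qwen2.5-1.5B-Instruct-Coder-Math-Bunnycore-Fusion"

# Load the tokenizer and model; device_map="auto" places weights on GPU if available.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding and math assistant."},
    {"role": "user", "content": "Write a Python function that returns the nth Fibonacci number."},
]

# Format the conversation with the model's chat template and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```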
📜 License
This model is open-sourced under the Apache-2.0 license, which permits free use and modification as long as proper attribution is given.
💡 Tags
merge
Qwen
Coder
Math
Bunnycore
instruction-following
long-form-generation
Base model: Qwen/Qwen2.5-1.5B