---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- uproai/ros-7b-v1
- WizardLM/WizardMath-7B-V1.1
base_model:
- uproai/ros-7b-v1
- WizardLM/WizardMath-7B-V1.1
---
# Rose-2x7B-GGUF
Rose-2x7B-GGUF is the GGUF version of [Rose-2x7B](https://huggingface.co/uproai/Rose-2x7B), a Mixture of Experts (MoE) built from the following models using [Mergekit](https://github.com/cg123/mergekit):
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
## 🧩 Configuration
```yaml
base_model: uproai/ros-7b-v1
experts:
  - source_model: maywell/PiVoT-0.1-Starling-LM-RP
    positive_prompts:
      - "storywriting"
      - "write"
      - "scene"
      - "story"
      - "character"
  - source_model: WizardLM/WizardMath-7B-V1.1
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
tokenizer_source: union
```
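
## 💻 Usage

A minimal sketch for running the quantized model with `llama-cpp-python`, assuming the GGUF files are hosted in this repo. The `filename` below is an assumption for illustration; check the repository's file listing for the actual quantization names (e.g. Q4_K_M, Q5_K_M).

```python
# Hypothetical usage sketch -- repo_id and GGUF filename are assumptions,
# not taken from this card; adjust them to the files actually published.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized GGUF file from the Hub.
model_path = hf_hub_download(
    repo_id="uproai/Rose-2x7B-GGUF",
    filename="rose-2x7b.Q4_K_M.gguf",  # assumed filename
)

# Load the model; n_gpu_layers=-1 offloads all layers to GPU when available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Chat-style generation; the roleplay expert handles creative prompts,
# the math expert is routed in for reasoning-style prompts.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene between two rival mages."}],
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["message"]["content"])
```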