---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- uproai/ros-7b-v1
- WizardLM/WizardMath-7B-V1.1
base_model:
- uproai/ros-7b-v1
- WizardLM/WizardMath-7B-V1.1
---
# Rose-2x7B-GGUF
Rose-2x7B-GGUF is the GGUF version of [Rose-2x7B](https://huggingface.co/uproai/Rose-2x7B), a Mixture of Experts (MoE) built from the following models using [Mergekit](https://github.com/cg123/mergekit):
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
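The repository ships standalone GGUF files that run on llama.cpp-compatible runtimes. A minimal sketch for fetching one quantization with `huggingface_hub` (the filename below is an assumption; check the repository's *Files* tab for the actual quantization names):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Download one quantization of the model from the Hub.
# NOTE: the filename is hypothetical; list the repository files to find
# the quantization you want (e.g. Q4_K_M, Q5_K_M, ...).
model_path = hf_hub_download(
    repo_id="uproai/Rose-2x7B-GGUF",
    filename="rose-2x7b.Q4_K_M.gguf",  # hypothetical filename
)
print(model_path)
```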
## 🧩 Configuration
```yaml
base_model: uproai/ros-7b-v1
experts:
  - source_model: maywell/PiVoT-0.1-Starling-LM-RP
    positive_prompts:
      - "storywriting"
      - "write"
      - "scene"
      - "story"
      - "character"
  - source_model: WizardLM/WizardMath-7B-V1.1
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
tokenizer_source: union
```
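## 💻 Usage
A minimal inference sketch using the `llama-cpp-python` bindings, assuming a GGUF file was downloaded as shown above; the context size, GPU offload, and sampling values are illustrative, not recommendations:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the downloaded GGUF file (path from the download snippet above).
llm = Llama(
    model_path=model_path,  # e.g. ".../rose-2x7b.Q4_K_M.gguf"
    n_ctx=4096,             # context window; adjust to taste
    n_gpu_layers=-1,        # offload all layers to GPU if available (0 = CPU only)
)

# A math-flavored prompt, which the gate should route to the WizardMath expert.
output = llm(
    "Solve step by step: what is 12 * 17?",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```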