---
license: apache-2.0
tags:
- merge
- roleplay
- exl2
- not-for-all-audiences
---
# RP-Stew-v4.0-34B
Base model:
https://huggingface.co/ParasiticRogue/RP-Stew-v4.0-34B
Parquet dataset used for quantization calibration (Bluemoon-Light, Chat-Vicuna-1.1 format):
https://huggingface.co/datasets/ParasiticRogue/Bluemoon-Light
Another experimental merge and quant aimed at increasing Stew's capabilities, this time with some slight alterations to the models used. This one actually seems to show a bit more promise than v3 in the brief tests done so far.
trust-remote-code must still be enabled for this version because the base model is Capybara, but I'll look into fixing this later if it performs comparably to v2 or better during further testing.
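For reference, a minimal loading sketch with the Transformers library; the repo ID here points at the unquantized base model, so adjust it to whichever quant you actually download:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True is the switch mentioned above; without it
# the custom Capybara-derived model code will refuse to load.
model_id = "ParasiticRogue/RP-Stew-v4.0-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
)
```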
## Settings
- Temperature @ 0.95
- Min-P @ 0.1
- Smoothing Factor @ 0.3
- DRY Multiplier (plus standard DRY settings) @ 0.8
- Skip Special Tokens @ On
- Everything else @ Off
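Summarized as a plain config dict for quick copying (key names are illustrative, since every frontend spells them slightly differently):
```python
# Illustrative only: mirrors the recommended settings above.
# Actual parameter names depend on your backend (SillyTavern,
# text-generation-webui, etc.), so map them accordingly.
sampler_settings = {
    "temperature": 0.95,
    "min_p": 0.1,
    "smoothing_factor": 0.3,
    "dry_multiplier": 0.8,  # leave the other DRY knobs at their defaults
    "skip_special_tokens": True,
    # every other sampler/penalty stays off
}
```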
### Prompt Format: Chat-Vicuna-1.1
```
SYSTEM: {system_prompt}<|end|>
USER: {prompt}<|end|>
ASSISTANT: {output}<|end|>
```
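A small sketch of how a prompt could be assembled in this format (the helper function is made up for illustration; the model's reply is generated after the trailing `ASSISTANT:` tag and ends at `<|end|>`):
```python
def build_chat_vicuna_prompt(system_prompt: str, user_prompt: str) -> str:
    """Assemble a Chat-Vicuna-1.1 prompt for a single turn."""
    return (
        f"SYSTEM: {system_prompt}<|end|>\n"
        f"USER: {user_prompt}<|end|>\n"
        "ASSISTANT:"
    )

prompt = build_chat_vicuna_prompt(
    "You are the narrator of an interactive story.",
    "Describe the tavern as the stranger walks in.",
)
```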
### Models Merged
The following models were included in the merge:
https://huggingface.co/NousResearch/Nous-Capybara-34B
https://huggingface.co/migtissera/Tess-2.0-Yi-34B-200K
https://huggingface.co/jondurbin/bagel-dpo-34b-v0.5
https://huggingface.co/maywell/PiVoT-SUS-RP
https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama
https://huggingface.co/NeverSleep/CausalLM-RP-34B
https://huggingface.co/adamo1139/Yi-34b-200K-AEZAKMI-RAW-TOXIC-2702