---
license: apache-2.0
base_model: alpindale/WizardLM-2-8x22B
---
# SorcererLM-8x22b-bf16
Oh boy, here we go. This is a low-rank (r=16, alpha=32) LoRA on top of WizardLM-2-8x22B, trained for 2 epochs on (cleaned & deduped) c2-logs. As far as I can tell, it's an upgrade over WizardLM-2-8x22B for RP purposes.
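For orientation, here's a minimal peft sketch of an adapter with the hyperparameters stated above. The target modules and dropout are assumptions on my part; the actual qlora-pipe configs in the `train` subfolder are authoritative.

```python
# Minimal sketch (not the actual training setup): attaching a low-rank
# adapter with r=16, alpha=32 to the base model via peft.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "alpindale/WizardLM-2-8x22B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                  # rank, as stated above
    lora_alpha=32,         # alpha, as stated above
    lora_dropout=0.0,      # assumption: not specified in this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```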
## Why a LoRA?
The choice was fully intentional. I briefly considered an FFT, but for this particular use case a LoRA seemed a better fit. WizardLM-2-8x22B is smart by itself, but the vocabulary it uses leaves much to be desired when it comes to RP. Training a low-rank LoRA on top of it to teach it some of Claude's writing style remedies that.
## Prompting
- Use the templates in Quant-Cartel/Recommended-Settings, under the SorcererLM folder.
- Alternatively, use Vicuna 1.1 and a sane context template.
- It's somewhat sensitive to samplers; I'd recommend Temperature 1, MinP 0.05 and a dash of DRY, but YMMV. Shorter prompts seem to work better, too. See the sketch below this list.
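If you're scripting against the model directly rather than using a frontend, here's a rough sketch of the Vicuna 1.1 format plus the samplers above. The settings dict is illustrative and assumes your backend exposes a `min_p` knob; DRY is backend-specific and omitted.

```python
# Vicuna 1.1 prompt format plus the suggested sampler values. Key names
# in the settings dict depend on your backend (e.g. llama.cpp, vLLM).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def vicuna_11_prompt(user_message: str) -> str:
    """Wrap a single user turn in the Vicuna 1.1 'USER/ASSISTANT' format."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

sampler_settings = {
    "temperature": 1.0,  # as recommended above
    "min_p": 0.05,       # as recommended above
}

print(vicuna_11_prompt("Describe the tavern as I step inside."))
```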
## Acknowledgments
- My Cartel bros, Envoid and especially I^2, for being amazing.
- My wallet for making sure I could do this without starving.
## Training
Trained using qlora-pipe. The configs are included in the `train` subfolder.
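For completeness, here's a hedged sketch of how a bf16 release like this one could be produced from such an adapter with peft. This is an assumed workflow, not necessarily the exact one used, and the adapter path is a placeholder.

```python
# Assumed workflow sketch: merge a trained LoRA adapter back into the
# base model to get standalone bf16 weights. The adapter path below is
# a placeholder.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "alpindale/WizardLM-2-8x22B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
merged.save_pretrained("SorcererLM-8x22b-bf16")
```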