---
datasets:
- TheSkullery/Aether-Lite-PurHyDe
base_model:
- TheSkullery/LD-Zephyria-37b
---
TheSkullery/LD-Zephyria-37b, healed by training a rank-128 LoRA (targeting only the q and down projections) for one epoch on TheSkullery/Aether-Lite-PurHyDe.
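
The healing setup described above can be sketched with the Hugging Face `peft` library. This is a hedged reconstruction, not the exact training script: the module names `q_proj` and `down_proj` are assumptions based on common Llama-style layer naming, and `lora_alpha` is not stated in this card.

```python
from peft import LoraConfig

# Sketch of the LoRA configuration described above (assumptions noted inline).
lora_config = LoraConfig(
    r=128,                                   # rank 128, as stated in the card
    lora_alpha=128,                          # assumed; the card does not state alpha
    target_modules=["q_proj", "down_proj"],  # only the q and down projections
    task_type="CAUSAL_LM",
)
```

This config would then be applied to the base model with `peft.get_peft_model` before running one epoch over the dataset.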
<p align="center"><img src="https://cdn-uploads.huggingface.co/production/uploads/633a809fa4a8f33508dce32c/kY8fwPcva1eb6ZvGeVI5T.png"/></p>
Thanks to @Steelskull for the merge recipe and dataset.