---
library_name: transformers
license: apache-2.0
base_model:
  - nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
datasets:
  - nbeerbower/Arkhaios-DPO
  - nbeerbower/Purpura-DPO
---


> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release, just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

# Mistral-Nemo-Prism-12B-v6

Mahou-1.5-mistral-nemo-12B-lorablated finetuned on Arkhaios-DPO and Purpura-DPO.

The goal was to reduce archaic language and purple prose in a completely uncensored model.

## Method

ORPO-tuned on 8x NVIDIA A40 GPUs for 10 epochs.
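ORPO combines the usual SFT loss with an odds-ratio preference penalty that rewards the chosen response over the rejected one. A minimal, simplified sketch of the preference term (taking sequence-average log-probabilities as inputs; not the exact formulation used in training):

```python
import math

def orpo_preference_loss(logp_chosen: float, logp_rejected: float) -> float:
    """Odds-ratio preference term from ORPO, simplified.

    Inputs are average per-token log-probabilities of the chosen and
    rejected responses (must be < 0 so that p < 1).
    """
    def log_odds(logp: float) -> float:
        # odds(y) = p / (1 - p), computed in log space
        p = math.exp(logp)
        return logp - math.log(1.0 - p)

    # Log odds ratio between chosen and rejected responses
    log_or = log_odds(logp_chosen) - log_odds(logp_rejected)
    # Loss is -log(sigmoid(log_or)): small when chosen >> rejected
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))
```

The loss shrinks as the model assigns higher odds to the chosen response than to the rejected one, pushing the policy away from the rejected style (here, archaic and purple prose) without a separate reference model.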

For this version, the LoRA rank was increased from 16 to 128.
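Raising the rank grows the adapter's trainable parameter count linearly, since LoRA adds two low-rank factors per adapted weight matrix. A quick back-of-envelope sketch (the 4096x4096 projection shape is illustrative, not taken from the actual Mistral-Nemo architecture):

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA decomposes the weight update as B @ A, where
    # A is (rank x d_in) and B is (d_out x rank)
    return d_in * rank + rank * d_out

# Hypothetical square attention projection
print(lora_params(4096, 4096, 16))   # 131072 trainable params
print(lora_params(4096, 4096, 128))  # 1048576 trainable params (8x more)
```

So going from rank 16 to rank 128 gives the adapter 8x the capacity per layer, at the cost of proportionally more optimizer state and a higher risk of overfitting on small DPO datasets.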