---
library_name: transformers
license: apache-2.0
base_model:
- nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
datasets:
- nbeerbower/Arkhaios-DPO
- nbeerbower/Purpura-DPO
---
![image/png](https://huggingface.co/nbeerbower/Mistral-Nemo-Prism-12B/resolve/main/prism-cover.png?download=true)
> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!
# Mistral-Nemo-Prism-12B-v6
[Mahou-1.5-mistral-nemo-12B-lorablated](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) finetuned on [Arkhaios-DPO](https://huggingface.co/datasets/nbeerbower/Arkhaios-DPO) and [Purpura-DPO](https://huggingface.co/datasets/nbeerbower/Purpura-DPO).
The goal was to reduce archaic language and purple prose in a completely uncensored model.
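A minimal inference sketch with 🤗 Transformers is shown below. The repo id, chat template usage, and sampling settings here are illustrative assumptions rather than an official recipe.

```python
# Minimal inference sketch (illustrative, not an official usage guide).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Mistral-Nemo-Prism-12B-v6"  # assumed repo id for this version

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Describe a rainy street in plain, modern prose."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```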
### Method
[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) on 8x A40 GPUs for 10 epochs.
For this version, LoRA rank was increased from 16 to 128.
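For reference, here is a hedged sketch of what an ORPO + LoRA run along these lines could look like with TRL and PEFT. Only the base model, the two datasets, the rank of 128, and the 10 epochs come from this card; every other hyperparameter and the dataset column layout are assumptions.

```python
# Hypothetical ORPO training sketch with TRL + PEFT.
# Hyperparameters marked "assumed" are illustrative, not the exact recipe used here.
import torch
from datasets import concatenate_datasets, load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Preference data; prompt / chosen / rejected columns and a shared schema are assumed.
dataset = concatenate_datasets([
    load_dataset("nbeerbower/Arkhaios-DPO", split="train"),
    load_dataset("nbeerbower/Purpura-DPO", split="train"),
])

peft_config = LoraConfig(
    r=128,                       # rank raised from 16 to 128 for this version
    lora_alpha=128,              # assumed; alpha is not stated in the card
    target_modules="all-linear", # assumed
    task_type="CAUSAL_LM",
)

training_args = ORPOConfig(
    output_dir="Mistral-Nemo-Prism-12B-v6",
    num_train_epochs=10,
    per_device_train_batch_size=1,   # assumed; card only states 8x A40
    gradient_accumulation_steps=8,   # assumed
    learning_rate=5e-6,              # assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # on older TRL versions this argument is tokenizer=
    peft_config=peft_config,
)
trainer.train()
```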