---
license: mit
base_model: moreh/MoMo-72B-lora-1.8.7-DPO
---

This model is a finetune of moreh's [MoMo-72B](https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO) model.
It was trained with new datasets and a new technique, which we will share with the community soon.
No form of model merging was used.
### Evaluation Results

Coming soon.

### Contamination Results

Coming soon.