---
license: apache-2.0
library_name: transformers
base_model:
- Sao10K/MN-12B-Lyra-v4
datasets:
- jondurbin/gutenberg-dpo-v0.1
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Lyra4-Gutenberg-12B-GGUF
This is a quantized version of [nbeerbower/Lyra4-Gutenberg-12B](https://huggingface.co/nbeerbower/Lyra4-Gutenberg-12B) created using llama.cpp.
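
As a reference point, here is a minimal sketch of loading one of the GGUF quants with llama-cpp-python. The quant filename and context size are assumptions; pick whichever file from this repo fits your hardware.

```python
# Minimal sketch: load a GGUF quant from this repo with llama-cpp-python.
# The filename pattern below is an assumption -- check the repo's file list
# and choose the quant that fits your RAM/VRAM.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Lyra4-Gutenberg-12B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; any file in the repo works
    n_ctx=4096,               # context length; adjust to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening line of a gothic novel."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```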
# Original Model Card

# Lyra4-Gutenberg-12B

[Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).

### Method
Finetuned with ORPO on an RTX 3090 + 4060 Ti for 3 epochs.
The approach follows [Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html).
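
For illustration, a hedged sketch of this setup using TRL's `ORPOTrainer`. The card only states the base model, dataset, and epoch count, so every hyperparameter below is an assumption, not the authors' actual configuration.

```python
# Sketch of ORPO finetuning the base model on gutenberg-dpo with TRL.
# Hyperparameters are illustrative assumptions; only the model, dataset,
# and 3 epochs come from the original card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "Sao10K/MN-12B-Lyra-v4"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# gutenberg-dpo provides prompt/chosen/rejected columns, as ORPO expects
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="Lyra4-Gutenberg-12B",
    num_train_epochs=3,             # stated in the card
    learning_rate=8e-6,             # assumed value
    beta=0.1,                       # ORPO preference weight; assumed
    per_device_train_batch_size=1,  # assumed; a 12B model needs small batches
    gradient_accumulation_steps=8,  # assumed
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # use tokenizer=... on older TRL versions
)
trainer.train()
```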