Update README.md
README.md
CHANGED
@@ -125,6 +125,12 @@ model-index:

This model was converted to GGUF format from [`Epiculous/Azure_Dusk-v0.2`](https://huggingface.co/Epiculous/Azure_Dusk-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Epiculous/Azure_Dusk-v0.2) for more details on the model.

---

Model details:

Following up on Crimson_Dawn-v0.2, we have Azure_Dusk-v0.2! Training on Mistral-Nemo-Base-2407 this time, I've added significantly more data and trained using RSLoRA rather than regular LoRA. Another key change is training on ChatML rather than the Mistral prompt format.
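
Because training used ChatML rather than the Mistral template, prompts for this model should follow the ChatML turn structure. A minimal sketch of that format (the system and user text here are purely illustrative):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Write a short scene set at dusk over a harbor.<|im_end|>
<|im_start|>assistant
```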

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)
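
A minimal sketch of the usual flow, assuming a Homebrew install; the GGUF filename below is a hypothetical placeholder for whichever quantized file you download from this repo:

```bash
# Install llama.cpp (the Homebrew formula works on macOS and Linux)
brew install llama.cpp

# Run a quick completion against a local GGUF file
# (the filename is a hypothetical example -- substitute the quant you actually downloaded)
llama-cli -m ./Azure_Dusk-v0.2-Q4_K_M.gguf -p "The harbor at dusk was" -n 128
```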