InferenceIllusionist committed: Update README.md

--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ license: apache-2.0
 # Mistral-Nemo-Instruct-12B-iMat-GGUF
 
 > [!WARNING]
-><b>Important Note:</b> Support for inferencing this model in llama.cpp has now been merged in [PR #8604](https://github.com/ggerganov/llama.cpp/pull/8604). Please ensure you are on release [b3438](https://github.com/ggerganov/llama.cpp/releases/tag/b3438) or newer. Text-generation-web-ui (Ooba) is also working as of 7/23.
+><b>Important Note:</b> Support for inferencing this model in llama.cpp has now been merged in [PR #8604](https://github.com/ggerganov/llama.cpp/pull/8604). Please ensure you are on release [b3438](https://github.com/ggerganov/llama.cpp/releases/tag/b3438) or newer. Text-generation-web-ui (Ooba) is also working as of 7/23, and Kobold.cpp is working as of [v1.71](https://github.com/LostRuins/koboldcpp/releases/tag/v1.71).
 
 Quantized from Mistral-Nemo-Instruct-2407 fp16
 * Weighted quantizations were created using the fp16 GGUF and groups_merged.txt in 92 chunks with n_ctx=512
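For reference, a quick way to confirm a local llama.cpp build meets the b3438 minimum called out in the warning (the binary path here is illustrative):

```sh
# Print the build number and commit of the local llama.cpp binaries;
# it should report b3438 or newer for this model to load.
./llama-cli --version
```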
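The weighted-quantization bullet above compresses a two-step workflow. A rough sketch of that process with llama.cpp's stock tools follows; the file names and the Q4_K_M target are placeholders, since the README does not record the exact invocations used for this repo:

```sh
# Step 1 (assumed invocation): compute an importance matrix from the
# fp16 GGUF, feeding groups_merged.txt as calibration data with a
# 512-token context and processing capped at 92 chunks, per the README.
./llama-imatrix -m Mistral-Nemo-Instruct-2407-fp16.gguf \
    -f groups_merged.txt -c 512 --chunks 92 -o imatrix.dat

# Step 2 (assumed invocation): apply the importance matrix while
# quantizing; Q4_K_M stands in for whichever quant type is produced.
./llama-quantize --imatrix imatrix.dat \
    Mistral-Nemo-Instruct-2407-fp16.gguf \
    Mistral-Nemo-Instruct-12B-iMat-Q4_K_M.gguf Q4_K_M
```

The importance matrix weights the quantization error per tensor toward activations seen on the calibration text, which is what distinguishes these "iMat" quants from plain static ones.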