Update README.md
README.md
CHANGED
@@ -17,6 +17,7 @@ datasets:
 - IntelligentEstate/The_Key
 ---

+# TEST
 # fuzzy-mittenz/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-IQ4_XS-GGUF
 This model was converted to GGUF format from [`WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B`](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space, with the "THE_KEY" dataset used for importance matrix quantization.
 Refer to the [original model card](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) for more details on the model.
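For context, the importance matrix quantization mentioned in the README corresponds roughly to the standard llama.cpp imatrix workflow sketched below. This is a minimal sketch, not the exact GGUF-my-repo recipe: the local paths, output file names, and the use of a plain-text dump of the The_Key dataset as calibration data (`the_key_calibration.txt`) are assumptions.

```bash
# Sketch of an IQ4_XS quantization with an importance matrix in llama.cpp
# (paths and calibration file name are assumed, not taken from this repo).

# 1. Convert the original Hugging Face model to a full-precision GGUF file.
python convert_hf_to_gguf.py WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B \
    --outtype f16 --outfile WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-f16.gguf

# 2. Compute an importance matrix over a calibration text file
#    (assumed here to be a text dump of IntelligentEstate/The_Key).
./llama-imatrix -m WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-f16.gguf \
    -f the_key_calibration.txt -o imatrix.dat

# 3. Quantize to IQ4_XS using that importance matrix.
./llama-quantize --imatrix imatrix.dat \
    WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-f16.gguf \
    WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-IQ4_XS.gguf IQ4_XS

# 4. Sanity-check the quantized model with a short prompt.
./llama-cli -m WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-IQ4_XS.gguf -p "Hello"
```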