fuzzy-mittenz committed
Commit 89b71f4 · verified · 1 Parent(s): 32d9fc1

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -17,7 +17,7 @@ datasets:
   - IntelligentEstate/The_Key
 ---
 # IntelligentEstate/o3-ReasoningRabbit_Q2.5-Cd-7B-IQ4_XS-GGUF
- This model is a unique blend of inference and coding ability on par with the new VL models of its size (it is also the only *Thinking* model without alignment), intended primarily for use with GPT4All. It excels in other applications as well and has reasoning capabilities (similar to QwQ/o1/o3) inside its interface, along with a unique JavaScript tool-call function. It was converted to GGUF format from [`WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B`](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) using llama.cpp, with the "THE_KEY" dataset used for importance-matrix quantization.
+ This model is a unique blend of inference and coding ability on par with the new VL models of its size (it is also the ONLY *Thinking* model without alignment), intended primarily for use with GPT4All. It excels in other applications as well and has reasoning capabilities (similar to QwQ/o1/o3) inside its interface, along with a unique JavaScript tool-call function. It was converted to GGUF format from [`WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B`](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) using llama.cpp, with the "THE_KEY" dataset used for importance-matrix quantization.
 Refer to the [original model card](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) for more details on the model.
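
Since the card names GPT4All as the primary runtime for this IQ4_XS GGUF, below is a minimal sketch of loading it through GPT4All's Python bindings. The local filename and directory are assumptions, not part of this repo's instructions; download the actual .gguf file from the repository first and adjust the name accordingly.

```python
# Minimal sketch: run the IQ4_XS GGUF locally with GPT4All's Python bindings.
# Assumption: the quantized file has been downloaded from this repo into the
# current directory under the name below (adjust to the actual filename).
from gpt4all import GPT4All

model = GPT4All(
    model_name="o3-reasoningrabbit_q2.5-cd-7b-iq4_xs.gguf",  # assumed local filename
    model_path=".",        # directory containing the downloaded GGUF
    allow_download=False,  # use the local file instead of GPT4All's model catalog
)

# A simple chat session exercising the model's coding/reasoning abilities.
with model.chat_session():
    reply = model.generate(
        "Write a JavaScript function that reverses a string and explain it briefly.",
        max_tokens=256,
    )
    print(reply)
```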