fuzzy-mittenz committed on
Commit 9d71140 · verified · 1 Parent(s): 3080521

Update README.md

Files changed (1): README.md (+7 -3)
README.md CHANGED
@@ -1,6 +1,8 @@
 ---
 license: apache-2.0
-base_model: WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
+base_model:
+- WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
+- Qwen/Qwen2.5-Coder-7B-Instruct
 language:
 - en
 pipeline_tag: text-generation
@@ -11,10 +13,12 @@ tags:
 - finetune
 - llama-cpp
 - gguf-my-repo
+datasets:
+- IntelligentEstate/The_Key
 ---
 
 # fuzzy-mittenz/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-IQ4_XS-GGUF
-This model was converted to GGUF format from [`WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B`](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+This model was converted to GGUF format from [`WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B`](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space, with the "The_Key" dataset used for importance-matrix quantization.
 Refer to the [original model card](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) for more details on the model.
 
 ## Use with llama.cpp
@@ -55,4 +59,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo fuzzy-mittenz/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-IQ4_XS-GGUF --hf-file whiterabbitneo-2.5-qwen-2.5-coder-7b-iq4_xs-imat.gguf -c 2048
-```
+```
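Note: the importance-matrix quantization described in the new model card text can also be reproduced locally with llama.cpp's own tools. A minimal sketch, assuming an f16 GGUF export of the base model and a plain-text dump of the calibration dataset (both file names below are placeholders, not the exact files used by the GGUF-my-repo space):
```
# Build an importance matrix from the calibration text
# (a stand-in for the IntelligentEstate/The_Key data, exported locally)
./llama-imatrix -m whiterabbitneo-2.5-qwen-2.5-coder-7b-f16.gguf -f the_key_calibration.txt -o imatrix.dat

# Quantize to IQ4_XS, weighting the quantization by the importance matrix
./llama-quantize --imatrix imatrix.dat whiterabbitneo-2.5-qwen-2.5-coder-7b-f16.gguf whiterabbitneo-2.5-qwen-2.5-coder-7b-iq4_xs-imat.gguf IQ4_XS
```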
 
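For a one-shot generation instead of a running server, the same --hf-repo/--hf-file flags work with the llama-cli binary; a sketch (the prompt is only an illustration):
```
# One-shot inference; the GGUF is fetched from the Hub and cached on first run
./llama-cli --hf-repo fuzzy-mittenz/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B-IQ4_XS-GGUF --hf-file whiterabbitneo-2.5-qwen-2.5-coder-7b-iq4_xs-imat.gguf -p "Write a bash one-liner that lists listening TCP ports." -n 256
```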
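Once llama-server is running (it listens on port 8080 by default), it exposes an OpenAI-compatible HTTP API that any client can query; a minimal curl sketch:
```
# Chat completion against the local llama-server instance
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [{"role": "user", "content": "Explain what an importance matrix does during quantization."}],
  "temperature": 0.7
}'
```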