mradermacher committed
Commit ab79cbf
1 Parent(s): f663dd2

auto-patch README.md

Files changed (1): README.md (+1, -2)
README.md CHANGED
@@ -5,7 +5,7 @@ language:
 library_name: transformers
 license: apache-2.0
 license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct/blob/main/LICENSE
-no_imatrix: "nan detected in blk.47.attn_q.weight"
+no_imatrix: nan detected in blk.47.attn_q.weight
 quantized_by: mradermacher
 tags:
 - code
@@ -24,7 +24,6 @@ tags:
 static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct
 
 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's