mradermacher committed (verified)

Commit 76101a6 · 1 parent: b710703

auto-patch README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED

@@ -4,7 +4,8 @@ language:
 - en
 library_name: transformers
 license: apache-2.0
-no_imatrix: "Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a very low-bit quantization"
+no_imatrix: Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a very
+  low-bit quantization
 quantized_by: mradermacher
 tags:
 - code
@@ -24,7 +25,6 @@ tags:
 static quants of https://huggingface.co/nextai-team/Moe-3x7b-QA-Code-Inst

 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage

 If you are unsure how to use GGUF files, refer to one of [TheBloke's
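
As a minimal sketch (assuming PyYAML; not part of the commit itself), the snippet below checks that the old quoted `no_imatrix` value and the new wrapped plain scalar from the first hunk load to the same string, i.e. the auto-patch only changes the YAML line wrapping of that field:

```python
# Sketch only, assumes PyYAML is installed; not part of the commit.
# Verifies that the old quoted value and the new wrapped plain scalar
# for `no_imatrix` parse to the same string.
import yaml

old = 'no_imatrix: "Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a very low-bit quantization"'
new = (
    "no_imatrix: Missing importance matrix for tensor blk.0.ffn_gate_exps.weight in a very\n"
    "  low-bit quantization\n"
)

assert yaml.safe_load(old) == yaml.safe_load(new)
print(yaml.safe_load(new)["no_imatrix"])
```

YAML folds the wrapped plain scalar back into a single line joined by a space, so readers of the model card metadata see the identical message either way.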