mradermacher committed on
Commit
3fc844a
1 Parent(s): bf0df8a

auto-patch README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -3,7 +3,8 @@ base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
 language:
 - en
 library_name: transformers
-no_imatrix: "cvs/llama.cpp/ggml/src/ggml.c:6399: GGML_ASSERT(c->ne[0] >= n_dims / 2) failed"
+no_imatrix: 'cvs/llama.cpp/ggml/src/ggml.c:6399: GGML_ASSERT(c->ne[0] >= n_dims /
+  2) failed'
 quantized_by: mradermacher
 ---
 ## About
@@ -16,7 +17,6 @@ quantized_by: mradermacher
 static quants of https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base
 
 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
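The first hunk only reflows the `no_imatrix` front-matter value: a double-quoted one-liner becomes a single-quoted scalar wrapped across two lines, as a YAML dumper would emit it. YAML folds the line break inside a quoted scalar into a single space, so both forms parse to the same string. A minimal sketch of that equivalence, assuming PyYAML is installed (the `one_line` and `folded` variable names are illustrative, not from the repo):

```python
import yaml

# The original double-quoted, single-line form of the front-matter entry.
one_line = (
    'no_imatrix: "cvs/llama.cpp/ggml/src/ggml.c:6399: '
    'GGML_ASSERT(c->ne[0] >= n_dims / 2) failed"'
)

# The reflowed single-quoted form from the patch: the value is wrapped,
# with the continuation line indented two spaces.
folded = (
    "no_imatrix: 'cvs/llama.cpp/ggml/src/ggml.c:6399: "
    "GGML_ASSERT(c->ne[0] >= n_dims /\n"
    "  2) failed'"
)

# Line folding joins the wrapped value with one space, so both parse equal.
assert yaml.safe_load(one_line) == yaml.safe_load(folded)
```

So the hunk is a cosmetic re-serialization by the auto-patcher, not a change to the recorded assertion failure.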