Commit d484173 (verified) by fuzzy-mittenz · 1 Parent(s): f12175f

Update README.md

Files changed (1): README.md +2 −2
README.md CHANGED
@@ -7,9 +7,9 @@ tags:
 - llama-cpp
 - gguf-my-repo
 ---
-
+CHATML TEMP
 # fuzzy-mittenz/Llama-3.1-Minitron-4B-Width-Base-chatml-IQ4_NL-GGUF
-This model was converted to GGUF format from [`IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml`](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+This model was converted to GGUF format from [`IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml`](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml) using llama.cpp
 Refer to the [original model card](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml) for more details on the model.
 
 ## Use with llama.cpp