fuzzy-mittenz committed · Commit 73ec572 · verified · 1 Parent(s): d484173

Update README.md

Files changed (1): README.md +3 -0
README.md CHANGED
@@ -8,7 +8,10 @@ tags:
 - gguf-my-repo
 ---
 CHATML TEMP
+
+
 # fuzzy-mittenz/Llama-3.1-Minitron-4B-Width-Base-chatml-IQ4_NL-GGUF
+## Used the fluently-sets/reasoning-1-1k-demo dataset for transition QAT
 This model was converted to GGUF format from [`IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml`](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml) using llama.cpp.
 Refer to the [original model card](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml) for more details on the model.
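Since the commit describes a GGUF conversion via llama.cpp, a minimal usage sketch may help. This assumes llama.cpp is installed locally; the `--hf-file` name below is an assumption inferred from the repo's naming convention, not taken from the source, so check the repo's file list for the exact `.gguf` filename.

```shell
# Install llama.cpp (e.g. via Homebrew); any recent build with
# --hf-repo / --hf-file support works.
brew install llama.cpp

# Run the IQ4_NL-quantized model directly from the Hugging Face Hub.
# NOTE: the --hf-file value is a guessed filename; verify it against
# the actual files in the repository before running.
llama-cli \
  --hf-repo fuzzy-mittenz/Llama-3.1-Minitron-4B-Width-Base-chatml-IQ4_NL-GGUF \
  --hf-file llama-3.1-minitron-4b-width-base-chatml-iq4_nl.gguf \
  -p "Write a short poem about quantization."
```

Because the base model uses a ChatML template (per the "CHATML TEMP" note above), interactive use with `llama-cli -cnv` should pick up the chat template embedded in the GGUF metadata, if present.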