davzoku committed · Commit c43f4f1 · verified · 1 Parent(s): b6a90ba

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -37,6 +37,8 @@ CRIA v1.3 comes with several variants.
  - [davzoku/cria-llama2-7b-v1.3-GGML](https://huggingface.co/davzoku/cria-llama2-7b-v1.3-GGML): Quantized Merged Model
  - [davzoku/cria-llama2-7b-v1.3_peft](https://huggingface.co/davzoku/cria-llama2-7b-v1.3_peft): PEFT adapter
  - [davzoku/cria-llama2-7b-v1.3-GGUF](https://huggingface.co/davzoku/cria-llama2-7b-v1.3-GGUF): GGUF Format
+ - converted from GGML q4_0 using `python3 convert-llama-ggml-to-gguf.py -i ../text-generation-webui/models/cria/cria-llama2-7b-v1.3.ggmlv3.q4_0.bin -o cria-llama2-7b-v1.3.gguf`
+

  This model is converted from the q4_0 GGML version of CRIA v1.3 using llama.cpp's [convert-llama-ggml-to-gguf.py](https://github.com/ggerganov/llama.cpp/commit/f23c0359a32871947169a044eb1dc4dbffd0f405#diff-ca59e5f600b44e9c3cdfed1ffd04677524dfd4c2f43d8b4bb19fdb013e277871) script.
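
Once converted, the resulting GGUF file can be loaded with any GGUF-compatible runtime. Below is a minimal sketch using llama-cpp-python; it is not part of this commit, and the model path and generation parameters are illustrative assumptions.

```python
from llama_cpp import Llama

# Load the converted GGUF model (path assumed to match the output of the
# convert-llama-ggml-to-gguf.py command shown in the diff above).
llm = Llama(model_path="cria-llama2-7b-v1.3.gguf", n_ctx=2048)

# Run a simple completion to verify the converted model loads and generates text.
output = llm("What is a cria?", max_tokens=64)
print(output["choices"][0]["text"])
```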