Added the DeciLM-7B-instruct model with uniform GQA in GGUF format, at multiple quantization levels.
The following changes are required for the base model (7B-instruct) to work in the llama.cpp codebase:
- GQA
  - Uniform GQA (group size fixed to 4) instead of variable GQA
- RoPE
  - Linear RoPE scaling instead of dynamic RoPE scaling
- Tested on an M2 with 32 GB of RAM
- Reaches 12,000-13,000 tok/s
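The two conversions above can be sketched in a few lines of Python. This is a minimal illustration, not the llama.cpp implementation; the query-head count and the scaling factor are assumptions for the example (DeciLM's variable GQA assigns a different group size per layer, which a uniform-GQA path cannot express, hence fixing the group size to 4):

```python
import numpy as np

# Uniform GQA: every group of 4 query heads shares one KV head.
# 32 query heads is an assumed figure for illustration only.
n_head, group_size = 32, 4
n_kv_head = n_head // group_size            # -> 8 KV heads

# Map each query head to the KV head its group shares.
kv_index = np.arange(n_head) // group_size

# Linear RoPE scaling: positions are divided by a fixed factor before
# the rotary angles are computed (the factor here is assumed).
def rope_angles(positions, dim=128, base=10000.0, scale=1.0):
    pos = np.asarray(positions, dtype=np.float64) / scale  # linear scaling
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(pos, inv_freq)          # shape (seq_len, dim // 2)

# With scale=2.0, position 100 yields the same angles as position 50:
assert np.allclose(rope_angles([100], scale=2.0), rope_angles([50]))
```

Dynamic (NTK-style) scaling would instead adjust `base` as a function of sequence length; pinning it to a constant divisor is what makes the conversion "linear".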
- .gitattributes +3 -0
- decilm-7b-uniform-gqa-f16.gguf +3 -0
- decilm-7b-uniform-gqa-f32.gguf +3 -0
- decilm-7b-uniform-gqa-q8_0.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+decilm-7b-uniform-gqa-f16.gguf filter=lfs diff=lfs merge=lfs -text
+decilm-7b-uniform-gqa-f32.gguf filter=lfs diff=lfs merge=lfs -text
+decilm-7b-uniform-gqa-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
decilm-7b-uniform-gqa-f16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c28d32a79f4dea3cf931b40237b64ce9109efd7e6c8a990a8c611286b9b2341f
+size 14217232832
decilm-7b-uniform-gqa-f32.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b45f3e8a5e05b557c6f8c421d47f2a3226af912016f923150ae07789977ba4cc
+size 28430792256
decilm-7b-uniform-gqa-q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b60beae4c9d5d7131f846f7545cb386e20d8f2c613844a0142ee701537dbfe3
+size 7554187712
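Each `.gguf` entry above is a Git LFS pointer rather than the weights themselves; the actual file is fetched on `git lfs pull`. A small sketch of parsing the three-line pointer format (the function name is mine, using the q8_0 pointer from this commit as input):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1b60beae4c9d5d7131f846f7545cb386e20d8f2c613844a0142ee701537dbfe3
size 7554187712
"""
info = parse_lfs_pointer(pointer)
size_gib = int(info["size"]) / 2**30  # q8_0 is roughly 7 GiB on disk
```

The `size` field explains the three quantization levels at a glance: f32 (~26 GiB) and f16 (~13 GiB) are full and half precision, while q8_0 (~7 GiB) is the one that fits comfortably in 32 GB of RAM alongside the OS.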