What 2bit quantization approach are you using?
1
#13 opened about 1 year ago by ingridstevens

Not able to use with ctransformers
1
#12 opened about 1 year ago by aravindsr
NousCapybara 34b
#11 opened about 1 year ago by Alastar-Smith
code-llama-70b
#10 opened about 1 year ago by eramax

What -ctx and -chunks parameters did you use to make the iMatrix of the Llama 2 70b?
1
#9 opened about 1 year ago by Nexesenex

Quantize these amazing models
#8 opened about 1 year ago by deleted
mixtral-instruct-8x7b for Q2KS as well
#7 opened about 1 year ago by shing3232
Would love a deepseek-coder 2bit quant. I bet others would love it too :)
2
#6 opened about 1 year ago by subiculumforge
[Model request] Saily 100b, Saily 220b
1
#5 opened about 1 year ago by Perpetuity7
Could we combine AWQ and importance matrix calculation to further improve perplexity?
3
#4 opened about 1 year ago by shing3232
[Model Request] cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
#3 opened about 1 year ago by Joseph717171
Magic Issues with nous-hermes-2-34b-2.16bpw.gguf (Log Attached...)
#2 opened about 1 year ago by Joseph717171
Please, a 2bit quant of the rocket 3b model?
17
#1 opened about 1 year ago by Shqmil