Can you please create weighted/imatrix quant versions?
#1 opened by t1u1
Similar to https://huggingface.co/mradermacher/Lamarck-14B-v0.6-i1-GGUF
They seem to run faster. However, I find phi4-GGUF to give better results, so it would be nice to have imatrix versions of it as well, if possible.
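For reference, this is roughly the usual llama.cpp workflow for producing imatrix quants, as I understand it — file names, the calibration text, and the IQ4_XS target type here are just placeholders/examples:

```shell
# Compute an importance matrix from a calibration text
# (model-f16.gguf and calibration.txt are placeholder names).
./llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# Quantize the full-precision model using the importance matrix
# (IQ4_XS is just one example of an imatrix-friendly quant type).
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ4_XS.gguf IQ4_XS
```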
Thanks