Convert CodeV-QW-7B to GGUF

#281
by mfkiwl - opened

Dear Michael,

I downloaded your model CodeV-CL-7B-GGUF using LM Studio, and it works well. The CodeV paper also shows that CodeV-QW-7B performs better on some of the benchmark tests. Could I request a conversion of CodeV-QW-7B to GGUF, which would be really convenient for LM Studio users? I attached the link to the model for your convenience: yang-z/CodeV-QW-7B

Best regards,
Xingguo

Sure, it's queued and should be done in a few hours. You can even follow its progress at http://hf.tst.eu/status.html

Cheers!

mradermacher changed discussion status to closed

Many thanks. Any planned date for uploading to Hugging Face?

Sorry, I had queued it, but somehow the job got lost; I can't explain it myself. Anyway, I've queued it again. Uploads should appear as soon as a static or imatrix job has started. I'll report back if something goes wrong again.

Right, the pre-tokenizer is not supported by llama.cpp, i.e. support for this model is currently not implemented in llama.cpp.
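For context on why an unsupported pre-tokenizer blocks conversion: llama.cpp's `convert_hf_to_gguf.py` identifies a model's BPE pre-tokenizer by hashing the tokenizer's output on a fixed probe string and looking the digest up in a table of known pre-tokenizers; an unknown digest aborts the conversion. The sketch below is a simplified, hypothetical illustration of that check — the digests, names, and function signature are placeholders, not the actual values or API from llama.cpp.

```python
import hashlib

# Placeholder table mapping a tokenization digest to a known pre-tokenizer
# name. The real convert script ships a much longer, curated list; these
# entries are invented for illustration only.
KNOWN_PRETOKENIZERS = {
    "0ef9807a4087ebef" + "0" * 48: "llama-bpe",       # placeholder digest
    "8aeee3860c56296a" + "0" * 48: "deepseek-coder",  # placeholder digest
}


def resolve_pretokenizer(token_ids: list[int]) -> str:
    """Map the tokenization of a probe string to a known pre-tokenizer name.

    `token_ids` stands in for the tokenizer's encoding of the probe string.
    """
    chkhsh = hashlib.sha256(str(token_ids).encode()).hexdigest()
    try:
        return KNOWN_PRETOKENIZERS[chkhsh]
    except KeyError:
        # This is the failure mode seen here: the conversion stops because
        # the pre-tokenizer is not (yet) implemented in llama.cpp.
        raise NotImplementedError(
            f"BPE pre-tokenizer was not recognized (chkhsh={chkhsh})"
        )
```

Once llama.cpp adds the model's pre-tokenizer to its known list (usually via an upstream pull request), the same conversion job can simply be re-queued.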

Since we had a lot of failed models yesterday, I must have deleted it without reporting back (and, indeed, remembering). Sorry.
