2Jyq/llm4decompile-9b-v2-GGUF

This model was converted to GGUF format from LLM4Binary/llm4decompile-9b-v2 using llama.cpp. Refer to the original model card for more details on the model.

GGUF
- Model size: 8.83B params
- Architecture: llama
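
A minimal usage sketch with llama-cpp-python follows (not part of the original card; any llama.cpp-compatible runtime works). The quant filename and the decompilation prompt template below are assumptions for illustration; refer to the original LLM4Binary/llm4decompile-9b-v2 card for the exact prompt format.

```python
# Minimal sketch: load one of this repo's GGUF files with llama-cpp-python.
# Install: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# The Q4_K_M filename pattern is an assumption -- substitute whichever
# quant file from this repo fits your hardware.
llm = Llama.from_pretrained(
    repo_id="2Jyq/llm4decompile-9b-v2-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,  # decompiled functions can be long; raise if needed
)

# llm4decompile-9b-v2 refines decompiler (e.g. Ghidra) pseudo-code into
# readable C source. This prompt template is illustrative only; check the
# original model card for the expected format.
pseudo_code = 'undefined8 main(void)\n{\n  puts("Hello");\n  return 0;\n}'
prompt = f"# This is the assembly code:\n{pseudo_code}\n# What is the source code?\n"

out = llm(prompt, max_tokens=512, temperature=0.0)
print(out["choices"][0]["text"])
```

Lower-bit quants trade output fidelity for memory; on an 8.83B-parameter model, a 4-bit file needs roughly 5-6 GB of RAM or VRAM.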