# eformat/granite-3.0-8b-instruct-Q4_K_M-GGUF

Not all tools (vLLM, llama.cpp) support the new Granite model config params yet (as of 2024-10-25):

```
# config.json
"model_type": "granite",
"architectures": [
  "GraniteForCausalLM"
]
```

This GGUF conversion was done using the old config values:

```
# config.json
"model_type": "llama",
"architectures": [
  "LlamaForCausalLM"
]
```

This GGUF loads OK; tested with:

```bash
# llama.cpp
./llama-server --verbose --gpu-layers 99999 --parallel 2 --ctx-size 4096 -m ~/instructlab/models/granite-3.0-8b-instruct-Q4_K_M.gguf

# vllm
vllm serve ~/instructlab/models/granite-3.0-8b-instruct-Q4_K_M.gguf
```
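Once either server is up, a quick smoke test against the OpenAI-compatible chat endpoint looks like this (assuming llama-server's default port 8080; adjust if you passed `--port`):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64
      }'
```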