## Description

This repository is forked from IBM's granite-20b-code-base-GGUF at commit d70433a71e2fb9e20f8bfca3ff2d8c15393f0e44.
Refer to the original model card for more details.

## Use with llama.cpp

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# build
make

# run generation
./main -m granite-20b-code-base-GGUF/granite-20b-code-base.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
```
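As an alternative to the CLI above, the same GGUF file can be loaded from Python via the llama-cpp-python bindings. This is a minimal sketch, assuming `llama-cpp-python` is installed (`pip install llama-cpp-python`) and the quantized model file has been downloaded to the path shown:

```python
from llama_cpp import Llama

# Load the 4-bit quantized GGUF model (path assumes the repo layout above)
llm = Llama(
    model_path="granite-20b-code-base-GGUF/granite-20b-code-base.Q4_K_M.gguf",
    n_ctx=2048,  # context window; adjust to taste
)

# Same prompt and token budget as the ./main example
output = llm("def generate_random(x: int):", max_tokens=128)
print(output["choices"][0]["text"])
```

Sampling parameters such as `temperature` and `top_p` can be passed to the call if deterministic defaults are not desired.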
## Model details

- Format: GGUF (4-bit quantization, Q4_K_M)
- Model size: 20.1B params
- Architecture: starcoder
