
Description:

This model is forked from IBM's granite-20b-code-base-GGUF (commit d70433a71e2fb9e20f8bfca3ff2d8c15393f0e44).
Refer to the original model card for more details.

Use with llama.cpp

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# build
make

# run generation
./main -m granite-20b-code-base-GGUF/granite-20b-code-base.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
```
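The same GGUF file can also be loaded from Python. Below is a minimal sketch using the llama-cpp-python bindings (an assumption; they are not mentioned in the original card, and the model path mirrors the shell command above). The import is deferred so the sketch degrades gracefully when the library or the model file is not present.

```python
from pathlib import Path

# Assumed local path, matching the llama.cpp example above.
MODEL_PATH = Path("granite-20b-code-base-GGUF/granite-20b-code-base.Q4_K_M.gguf")


def complete(prompt: str, n_tokens: int = 128) -> str:
    """Generate a completion for `prompt` using the quantized GGUF model."""
    # Deferred import: requires `pip install llama-cpp-python`.
    from llama_cpp import Llama

    llm = Llama(model_path=str(MODEL_PATH), n_ctx=2048)
    out = llm(prompt, max_tokens=n_tokens)
    return out["choices"][0]["text"]


if MODEL_PATH.exists():
    print(complete("def generate_random(x: int):"))
```

This mirrors the `./main` invocation: the prompt and 128-token budget are the same, only the frontend differs.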
Format: GGUF
Model size: 20.1B params
Architecture: starcoder
Quantization: 4-bit
