Description:
This model is forked from IBM's granite-20b-code-base-GGUF at commit d70433a71e2fb9e20f8bfca3ff2d8c15393f0e44. Refer to the original model card for more details.
Use with llama.cpp

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# build
make
# run generation
./main -m granite-20b-code-base-GGUF/granite-20b-code-base.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
```
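Before pointing llama.cpp at a downloaded file, it can be useful to verify that the file really is a GGUF container. A minimal sketch, assuming only the documented start of the format (the 4-byte magic `b"GGUF"` followed by a little-endian uint32 version); the function name is illustrative, and the remaining header fields are skipped:

```python
import struct

def read_gguf_header(path: str):
    """Return (is_gguf, version) for a file on disk.

    GGUF files begin with the magic bytes b"GGUF" and a little-endian
    uint32 format version. The full header also carries tensor and
    metadata key/value counts, which this sketch does not parse.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return False, None
        (version,) = struct.unpack("<I", f.read(4))
        return True, version
```

A mismatched magic usually means a truncated download or a legacy GGML file, which llama.cpp will refuse to load.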
Inference Providers

This model is not currently available via any supported third-party Inference Providers, nor is it deployed on the HF Inference API.
Datasets used to train cobrakenji/granite-20b-code-base-GGUF
Evaluation results (self-reported)

| Benchmark | pass@1 |
|---|---|
| MBPP | 43.8 |
| MBPP+ | 51.6 |
| HumanEvalSynthesis (Python) | 48.2 |
| HumanEvalSynthesis (JavaScript) | 50.0 |
| HumanEvalSynthesis (Java) | 59.1 |
| HumanEvalSynthesis (Go) | 32.3 |
| HumanEvalSynthesis (C++) | 40.9 |
| HumanEvalSynthesis (Rust) | 35.4 |
| HumanEvalExplain (Python) | 17.1 |
| HumanEvalExplain (JavaScript) | 18.3 |
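The pass@1 metric reported above is conventionally the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021); the model card does not state how these numbers were computed, so this is a sketch of the standard definition, assuming n generated samples per task of which c pass the tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of them correct) passes.
    pass@k = 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer failures than the budget: at least one draw must succeed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With k=1 this reduces to c/n, the fraction of correct samples, averaged over all tasks in the benchmark.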