---
base_model: ibm-granite/granite-20b-code-instruct
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
inference: false
library_name: gguf
license: apache-2.0
metrics:
- code_eval
model-index:
- name: granite-20b-code-instruct
  results:
  - dataset:
      name: HumanEvalSynthesis(Python)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 60.4
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalSynthesis(JavaScript)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 53.7
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalSynthesis(Java)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 58.5
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalSynthesis(Go)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 42.1
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalSynthesis(C++)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 45.7
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalSynthesis(Rust)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 42.7
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalExplain(Python)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 44.5
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalExplain(JavaScript)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 42.7
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalExplain(Java)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 49.4
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalExplain(Go)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 32.3
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalExplain(C++)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 42.1
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalExplain(Rust)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 18.3
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalFix(Python)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 43.9
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalFix(JavaScript)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 43.9
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalFix(Java)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 45.7
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalFix(Go)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 41.5
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalFix(C++)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 41.5
      verified: false
    task:
      type: text-generation
  - dataset:
      name: HumanEvalFix(Rust)
      type: bigcode/humanevalpack
    metrics:
    - name: pass@1
      type: pass@1
      value: 29.9
      verified: false
    task:
      type: text-generation
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- code
- granite
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# granite-20b-code-instruct-IMat-GGUF
_Llama.cpp imatrix quantization of ibm-granite/granite-20b-code-instruct_
Original Model: [ibm-granite/granite-20b-code-instruct](https://huggingface.co/ibm-granite/granite-20b-code-instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3649](https://github.com/ggerganov/llama.cpp/releases/tag/b3649)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: βœ… Available
Link: [here](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [granite-20b-code-instruct.Q8_0.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q8_0.gguf) | Q8_0 | 21.48GB | βœ… Available | βšͺ Static | πŸ“¦ No
| [granite-20b-code-instruct.Q6_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q6_K.gguf) | Q6_K | 16.63GB | βœ… Available | βšͺ Static | πŸ“¦ No
| [granite-20b-code-instruct.Q4_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q4_K.gguf) | Q4_K | 12.82GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.Q3_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q3_K.gguf) | Q3_K | 10.57GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.Q2_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q2_K.gguf) | Q2_K | 7.93GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [granite-20b-code-instruct.BF16.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.BF16.gguf) | BF16 | 40.24GB | βœ… Available | βšͺ Static | πŸ“¦ No
| [granite-20b-code-instruct.FP16.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.FP16.gguf) | F16 | 40.24GB | βœ… Available | βšͺ Static | πŸ“¦ No
| [granite-20b-code-instruct.Q8_0.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q8_0.gguf) | Q8_0 | 21.48GB | βœ… Available | βšͺ Static | πŸ“¦ No
| [granite-20b-code-instruct.Q6_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q6_K.gguf) | Q6_K | 16.63GB | βœ… Available | βšͺ Static | πŸ“¦ No
| [granite-20b-code-instruct.Q5_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q5_K.gguf) | Q5_K | 14.81GB | βœ… Available | βšͺ Static | πŸ“¦ No
| [granite-20b-code-instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q5_K_S.gguf) | Q5_K_S | 14.02GB | βœ… Available | βšͺ Static | πŸ“¦ No
| [granite-20b-code-instruct.Q4_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q4_K.gguf) | Q4_K | 12.82GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q4_K_S.gguf) | Q4_K_S | 11.67GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ4_NL.gguf) | IQ4_NL | 11.55GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ4_XS.gguf) | IQ4_XS | 10.94GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.Q3_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q3_K.gguf) | Q3_K | 10.57GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q3_K_L.gguf) | Q3_K_L | 11.74GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q3_K_S.gguf) | Q3_K_S | 8.93GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ3_M.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ3_M.gguf) | IQ3_M | 9.59GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ3_S.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ3_S.gguf) | IQ3_S | 8.93GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ3_XS.gguf) | IQ3_XS | 8.66GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ3_XXS.gguf) | IQ3_XXS | 8.06GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.Q2_K.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q2_K.gguf) | Q2_K | 7.93GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.Q2_K_S.gguf) | Q2_K_S | 7.15GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ2_M.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ2_M.gguf) | IQ2_M | 7.05GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ2_S.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ2_S.gguf) | IQ2_S | 6.53GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ2_XS.gguf) | IQ2_XS | 6.16GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ2_XXS.gguf) | IQ2_XXS | 5.57GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ1_M.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ1_M.gguf) | IQ1_M | 4.91GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
| [granite-20b-code-instruct.IQ1_S.gguf](https://huggingface.co/legraphista/granite-20b-code-instruct-IMat-GGUF/blob/main/granite-20b-code-instruct.IQ1_S.gguf) | IQ1_S | 4.52GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/granite-20b-code-instruct-IMat-GGUF --include "granite-20b-code-instruct.Q8_0.gguf" --local-dir ./
```
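If you prefer Python over the CLI, the same single-file download can be done with the `huggingface_hub` API (a minimal sketch; the quant filename is just an example, pick any file from the tables above):
```
# Minimal sketch: fetch one quant file with the huggingface_hub Python API.
# The filename is an example; pick any quant from the tables above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/granite-20b-code-instruct-IMat-GGUF",
    filename="granite-20b-code-instruct.Q8_0.gguf",
    local_dir="./",
)
print(path)  # local path of the downloaded GGUF
```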
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/granite-20b-code-instruct-IMat-GGUF --include "granite-20b-code-instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
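For split quants, an equivalent Python approach (again a sketch, using `snapshot_download` with an example pattern) is:
```
# Minimal sketch: download every chunk of a split quant in one call.
# Adjust allow_patterns to the quant folder you actually need.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="legraphista/granite-20b-code-instruct-IMat-GGUF",
    allow_patterns=["granite-20b-code-instruct.Q8_0/*"],
    local_dir="./",
)
# see the FAQ below for merging the downloaded chunks
```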
---
## Inference
### Simple chat template
```
Question:
{user_prompt}
Answer:
{assistant_response}
Question:
{next_user_prompt}
```
### Chat template with system prompt
```
System:
{system_prompt}
Question:
{user_prompt}
Answer:
{assistant_response}
Question:
{next_user_prompt}
```
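If you assemble prompts in code rather than by hand, a small helper along these lines (an illustrative sketch, not part of the official chat template definition; exact whitespace may differ from the tokenizer's canonical template) covers both variants:
```
# Illustrative sketch (not from the original card): build the prompt format
# shown above. Exact whitespace may differ slightly from the model's
# canonical chat template, so verify against the tokenizer config if needed.
def build_prompt(user_prompt, system_prompt=None, history=None):
    parts = []
    if system_prompt:
        parts.append(f"System:\n{system_prompt}")
    for question, answer in (history or []):
        parts.append(f"Question:\n{question}")
        parts.append(f"Answer:\n{answer}")
    parts.append(f"Question:\n{user_prompt}")
    parts.append("Answer:")
    return "\n".join(parts) + "\n"

print(build_prompt("Write a function that checks whether a number is prime."))
```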
### Llama.cpp
```
llama.cpp/llama-cli -m granite-20b-code-instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
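The same files can also be loaded from Python through the `llama-cpp-python` bindings (a hedged sketch; the quant choice and parameters below are example assumptions, not recommendations):
```
# Hedged sketch: run a quant via llama-cpp-python (pip install llama-cpp-python).
# The quant file and parameters are example choices, not recommendations.
from llama_cpp import Llama

llm = Llama(model_path="granite-20b-code-instruct.Q4_K.gguf", n_ctx=4096)

prompt = "Question:\nWrite a Python one-liner that sums a list of integers.\n\nAnswer:\n"
out = llm(prompt, max_tokens=256, stop=["Question:"])
print(out["choices"][0]["text"])
```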
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).
### How do I merge a split GGUF?
1. Make sure you have `llama-gguf-split` available
    - To get hold of `llama-gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `llama-gguf-split`
2. Locate your GGUF chunks folder (ex: `granite-20b-code-instruct.Q8_0`)
3. Run `llama-gguf-split --merge granite-20b-code-instruct.Q8_0/granite-20b-code-instruct.Q8_0-00001-of-XXXXX.gguf granite-20b-code-instruct.Q8_0.gguf`
    - Make sure to point `llama-gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!