Update README.md

README.md CHANGED

@@ -2,14 +2,30 @@
 pipeline_tag: text-generation
 inference: false
 license: apache-2.0
-library_name:
+library_name: exllamav2
 tags:
 - language
 - granite-3.2
 base_model:
-- ibm-granite/granite-3.
+- ibm-granite/granite-3.2-2b-instruct
 ---
-
+# Granite-3.2-2B-Instruct-exl2
+Original model: [granite-3.2-2b-instruct](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct)
+Created by: [Granite Team, IBM](https://huggingface.co/ibm-granite)
+
+## Quants
+[4bpw h6 (main)](https://huggingface.co/cgus/granite-3.2-2b-instruct-exl2/tree/main)
+[4.5bpw h6](https://huggingface.co/cgus/granite-3.2-2b-instruct-exl2/tree/4.5bpw-h6)
+[5bpw h6](https://huggingface.co/cgus/granite-3.2-2b-instruct-exl2/tree/5bpw-h6)
+[6bpw h6](https://huggingface.co/cgus/granite-3.2-2b-instruct-exl2/tree/6bpw-h6)
+[8bpw h8](https://huggingface.co/cgus/granite-3.2-2b-instruct-exl2/tree/8bpw-h8)
+
+## Quantization notes
+Quantized with Exllamav2 0.2.8 using the default calibration dataset. Granite3 exl2 models require Exllamav2 0.2.7 or newer.
+Exl2 models have to be fully loaded into GPU VRAM; native RAM offloading isn't supported.
+These models require an Nvidia RTX GPU on Windows, or an Nvidia RTX or AMD ROCm GPU on Linux.
+
+# Original model card
 # Granite-3.2-2B-Instruct
 
 **Model Summary:**
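Each quant added under "Quants" lives on its own branch of the repo, so a specific bpw variant can be fetched by revision. A minimal sketch using `huggingface_hub`; the choice of the `4.5bpw-h6` branch and the local directory name are illustrative, not part of the commit:

```python
# Sketch: download one quant branch of the exl2 repo via huggingface_hub.
# The revision string matches a branch name from the "Quants" list above.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="cgus/granite-3.2-2b-instruct-exl2",
    revision="4.5bpw-h6",               # any branch from the Quants list
    local_dir="granite-3.2-2b-exl2",    # hypothetical local path
)
print(f"Downloaded to {model_dir}")
```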
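Because exl2 weights must sit entirely in GPU VRAM (per the quantization notes), inference goes through the exllamav2 library itself rather than a CPU-offloading loader. A rough sketch of exllamav2's dynamic-generator flow, assuming exllamav2 >= 0.2.7 and the directory from the download step; the prompt and token budget are placeholders:

```python
# Sketch: load the exl2 quant and run a short generation with exllamav2.
# Needs enough VRAM for the chosen bpw variant; nothing is offloaded to RAM.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("granite-3.2-2b-exl2")  # directory from the download step
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache as layers load
model.load_autosplit(cache)                # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
output = generator.generate(
    prompt="Explain exl2 quantization in one paragraph.",  # placeholder prompt
    max_new_tokens=200,
)
print(output)
```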