qanthony-z committed
Commit b74422a • Parent(s): 58f0676
Update README.md

README.md CHANGED
@@ -5,7 +5,7 @@ license: apache-2.0
 
 Zamba2-2.7B-Instruct is obtained from [Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B) by fine-tuning on instruction-following and chat datasets.
 
-Zamba2-2.7B-Instruct is a hybrid model composed of state-space and transformer blocks. It is based on the [Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B) architecture.
+Zamba2-2.7B-Instruct is a hybrid model composed of state-space ([Mamba2](https://github.com/state-spaces/mamba)) and transformer blocks. It is based on the [Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B) architecture.
 
 ## Quick start
 
@@ -18,7 +18,7 @@ To download Zamba2-2.7B-instruct, clone Zyphra's fork of transformers:
 
 4. `pip install accelerate`
 
 
-You can run the model without using the optimized
+You can run the model without using the optimized Mamba2 kernels, but it is **not** recommended as it will result in significantly higher latency and memory usage.
 
 To run on CPU, please specify `use_mamba_kernels=False` when loading the model using ``AutoModelForCausalLM.from_pretrained``.
 
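In practice, the two notes in this hunk combine into a single loading call. Below is a minimal sketch, not the README's official quick-start: it assumes Zyphra's transformers fork is installed per the steps above, that the checkpoint id is `Zyphra/Zamba2-2.7B-instruct` (the name used in the hunk header), and that the instruct tokenizer ships a chat template. Only `use_mamba_kernels=False` is taken directly from the diff.

```python
# Minimal sketch: CPU inference with the optimized Mamba2 kernels disabled.
# The repo id and the chat-template usage are assumptions;
# use_mamba_kernels=False is the flag named in the hunk above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-2.7B-instruct",
    torch_dtype=torch.float32,   # CPU-friendly dtype
    use_mamba_kernels=False,     # required when the fused kernels are unavailable
)

chat = [{"role": "user", "content": "In one sentence, what is a state-space model?"}]
input_ids = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids=input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

On a CUDA machine with the kernels available, the flag should be dropped; as the hunk notes, running without them costs significant latency and memory.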
@@ -54,7 +54,7 @@ Zamba2-2.7B-Instruct punches dramatically above its weight, achieving extremely
 
 | Model | Size | MT-Bench | IFEval |
 |-------------|----|----|----|
-| **Zamba2-2.
+| **Zamba2-2.7B-Instruct** | 2.7B | **72.40** | **48.02** |
 | Mistral-7B-Instruct | 7B | 66.4 | 45.3 |
 | Gemma2-2B-Instruct | 2.7B | 51.69 | 42.20 |
 | H2O-Danube-4B-Chat | 4B | 52.57 | 37.96 |
@@ -76,7 +76,7 @@ Zamba2-2.7B-Instruct's high performance, strong instruction-following and reason
 
 ## Model Details
 
-Zamba2-2.7B-Instruct utilizes and extends our original Zamba hybrid SSM-attention architecture. The core Zamba architecture consists of a backbone of
+Zamba2-2.7B-Instruct utilizes and extends our original Zamba hybrid SSM-attention architecture. The core Zamba architecture consists of a backbone of Mamba2 layers interleaved with one or more shared attention layers (one shared attention in Zamba1, two in Zamba2). This attention has shared weights to minimize the parameter cost of the model. We find that concatenating the original model embeddings to the input to this attention block improves performance, likely due to better maintenance of information across depth. The Zamba2 architecture also applies LoRA projection matrices to the shared MLP to gain some additional expressivity in each block and allow each shared block to specialize slightly to its own unique position while keeping the additional parameter overhead small.
 
 <center>
 <img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/XrEIEBxd0fqIgh3LyArAV.png" width="300" alt="Zamba architecture">
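The paragraph added by this hunk describes three interlocking mechanisms: a backbone of Mamba2 layers with shared attention blocks interleaved at intervals, concatenation of the original embeddings onto each shared block's input, and per-call-site LoRA projections on the shared MLP. As a reading aid, here is a structural sketch of that pattern; the dimensions, the interleave period, and the `nn.Identity` stand-ins for the Mamba2 layers are placeholder assumptions, and none of this is Zyphra's actual implementation.

```python
# Structural sketch of the Zamba2 block pattern described above -- illustrative
# only, NOT Zyphra's code. Dimensions, interleave period, and Mamba2 stand-ins
# are all placeholders.
import torch
import torch.nn as nn

class LoRA(nn.Module):
    """Low-rank adapter: up(down(x)) with rank << dim, so the overhead stays small."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))

class SharedBlock(nn.Module):
    """Attention + MLP whose weights are reused at every call site in the backbone."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, mlp_lora: LoRA) -> torch.Tensor:
        a, _ = self.attn(x, x, x)
        x = x + a
        # Shared MLP plus a per-call-site LoRA correction: each reuse of the
        # shared weights can specialize slightly at small parameter cost.
        return x + self.mlp(x) + mlp_lora(x)

class Zamba2Sketch(nn.Module):
    def __init__(self, dim: int = 1024, n_mamba: int = 24, period: int = 6):
        super().__init__()
        self.backbone = nn.ModuleList(nn.Identity() for _ in range(n_mamba))  # Mamba2 stand-ins
        self.shared = nn.ModuleList(SharedBlock(2 * dim) for _ in range(2))   # two shared blocks in Zamba2
        self.loras = nn.ModuleList(LoRA(2 * dim) for _ in range(n_mamba // period))
        self.proj_back = nn.Linear(2 * dim, dim)
        self.period = period

    def forward(self, emb: torch.Tensor) -> torch.Tensor:  # emb: (batch, seq, dim)
        h, call = emb, 0
        for i, mamba in enumerate(self.backbone):
            h = mamba(h)
            if (i + 1) % self.period == 0:
                # Concatenate the ORIGINAL embeddings onto the hidden state, which
                # the paragraph credits with preserving information across depth.
                x = torch.cat([h, emb], dim=-1)
                x = self.shared[call % 2](x, self.loras[call])
                h = h + self.proj_back(x)  # fold the shared block's output back into the backbone
                call += 1
        return h
```

The economics of the pattern: the attention and MLP weights are paid for once but applied at every call site, while each LoRA contributes only rank-sized matrices, which is how each reuse of a shared block can specialize to its position at small parameter overhead.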