Update README.md
README.md CHANGED
@@ -11,7 +11,7 @@ The models are released under the Jamba Open Model License, a permissive license
If you need to license the model for your needs, talk to us.

-For more details of this model, see the release blog post.
+For more details of this model, see the release [blog post](https://www.ai21.com/blog/introducing-jamba-1-6).

## Model Details
- **Developed by:** [AI21](https://www.ai21.com)
@@ -21,7 +21,17 @@ For more details of this model, see the release blog post.
- **Knowledge cutoff date:** March 5, 2024
- **Supported languages:** English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic and Hebrew

+## Results on common benchmarks
+
+| Benchmark              | Jamba Mini 1.6 | Ministral 8B | Llama 3.1 8B | Command R7B |
+|------------------------|:--------------:|:------------:|:------------:|:-----------:|
+| Arena Hard             | 51.2           | 41.35        | 28.17        | 27.95       |
+| CRAG                   | 76.2           | 52           | 60           | 23.1        |
+| FinanceBench (FullDoc) | 45.4           | 19.2         | 28.4         | 2.8         |
+| HELMET LongQA          | 46.9           | 33           | 29.2         | 9.6         |
+| LongBench              | 32             | 17.5         | 17.7         | 2           |
+
+LongBench and Arena Hard scores are taken from the official leaderboards for the models that appear there. Examples that could not fit a model's context window were scored accordingly. Because of the 32K context limit in its vLLM deployment, Ministral 8B was evaluated through its official API.

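The note above leaves the scoring of over-length examples implicit. The sketch below is a minimal illustration, assuming such examples simply count as failures (score 0); the model id, the 256K-token context figure, and the `generate_fn`/`score_fn` callables are illustrative assumptions rather than the harness that produced the table.

```python
# Minimal long-context scoring sketch (illustrative, not the actual harness):
# prompts that exceed the model's context window are scored 0 by assumption.
from transformers import AutoTokenizer

MODEL_ID = "ai21labs/AI21-Jamba-Mini-1.6"  # assumed Hugging Face model id
CONTEXT_LIMIT = 256_000                    # assumed context window, in tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def score_example(prompt: str, reference: str, generate_fn, score_fn) -> float:
    """Score one benchmark example, returning 0 if the prompt cannot fit."""
    if len(tokenizer(prompt)["input_ids"]) > CONTEXT_LIMIT:
        return 0.0  # assumption: over-length examples count against the model
    prediction = generate_fn(prompt)        # caller-supplied inference callable
    return score_fn(prediction, reference)  # caller-supplied benchmark metric
```

The same bookkeeping applies per model by swapping in its own limit, e.g. the 32K figure mentioned for the Ministral 8B vLLM deployment.
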
# Usage
## Prerequisites