Update README.md
## Performance

| Model | MMLU | GPQA-diamond | GSM8K | MATH-500 | IFEVAL | LIVEBENCH | HUMANEVAL | Average |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **`SmallThinker-4BA0.6B-Instruct`** | 66.11 | 31.31 | 80.02 | 60.60 | 69.69 | 42.20 | 82.32 | 61.75 |
| `Qwen3-0.6B` | 43.31 | 26.77 | 62.85 | 45.60 | 58.41 | 23.10 | 31.71 | 41.67 |
| `Qwen3-1.7B` | 64.19 | 27.78 | 81.88 | 63.60 | 69.50 | 35.60 | 61.59 | 57.73 |
| `Gemma3nE2b-it` | 63.04 | 20.20 | 82.34 | 58.60 | 73.20 | 27.90 | 64.63 | 55.70 |
| `Llama-3.2-3B-Instruct` | 64.15 | 24.24 | 75.51 | 40.00 | 71.16 | 15.30 | 55.49 | 49.41 |
| `Llama-3.2-1B-Instruct` | 45.66 | 22.73 | 1.67 | 14.40 | 48.06 | 13.50 | 37.20 | 26.17 |
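
The Average column is the arithmetic mean of the seven benchmark scores. As a quick sanity check, the sketch below (plain Python, with values copied from the table) reproduces the `SmallThinker-4BA0.6B-Instruct` average:

```python
# Recompute the Average column for the SmallThinker-4BA0.6B-Instruct row.
# Scores in table order: MMLU, GPQA-diamond, GSM8K, MATH-500, IFEVAL, LIVEBENCH, HUMANEVAL.
scores = [66.11, 31.31, 80.02, 60.60, 69.69, 42.20, 82.32]
average = sum(scores) / len(scores)
print(round(average, 2))  # -> 61.75, matching the table
```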

For the MMLU evaluation, we use a 0-shot chain-of-thought (CoT) setting.
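
This section does not spell out the harness behind that setting, so here is a minimal sketch of what a single 0-shot CoT MMLU-style query can look like with Hugging Face `transformers`. The repo id, prompt wording, and generation settings are illustrative assumptions, not the exact evaluation pipeline used for the table above.

```python
# Illustrative 0-shot CoT query (assumed repo id and prompt; not the official harness).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PowerInfer/SmallThinker-4BA0.6B-Instruct"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# 0-shot: no worked examples in the prompt; the instruction alone elicits chain-of-thought.
question = (
    "Which of the following is the body cavity that contains the pituitary gland?\n"
    "A. Abdominal\nB. Cranial\nC. Pleural\nD. Spinal\n"
    "Think step by step, then give the answer as a single letter."
)
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```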
## Model Card

<div align="center">