Sorrymaker2024 committed 1671d12 (verified) · Parent(s): 181de07

Update README.md

Files changed (1): README.md (+5 −6)
README.md CHANGED
@@ -15,11 +15,11 @@ without relying on the cloud.
 ## Performance
 | Model | MMLU | GPQA-diamond | GSM8K | MATH-500 | IFEVAL | LIVEBENCH | HUMANEVAL | Average |
 | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
-| **SmallThinker-4BA0.6B-Instruct** | 66.11 | 31.31 | 80.02 | 60.60 | 69.69 | 42.20 | 82.32 | 61.75 |
+| **SmallThinker-4BA0.6B-Instruct** | **66.11** | **31.31** | 80.02 | <u>60.60</u> | 69.69 | **42.20** | **82.32** | **61.75** |
 | Qwen3-0.6B | 43.31 | 26.77 | 62.85 | 45.6 | 58.41 | 23.1 | 31.71 | 41.67 |
-| Qwen3-1.7B | 64.19 | 27.78 | 81.88 | 63.6 | 69.50 | 35.60 | 61.59 | 57.73 |
+| Qwen3-1.7B | <u>64.19</u> | <u>27.78</u> | <u>81.88</u> | **63.6** | 69.50 | <u>35.60</u> | 61.59 | <u>57.73</u> |
-| Gemma3nE2b-it | 63.04 | 20.2 | 82.34 | 58.6 | 73.2 | 27.90 | 64.63 | 55.70 |
+| Gemma3nE2b-it | 63.04 | 20.2 | **82.34** | 58.6 | **73.2** | 27.90 | <u>64.63</u> | 55.70 |
-| Llama3.2-3B-Instruct | 64.15 | 24.24 | 75.51 | 40 | 71.16 | 15.30 | 55.49 | 49.41 |
+| Llama3.2-3B-Instruct | 64.15 | 24.24 | 75.51 | 40 | <u>71.16</u> | 15.30 | 55.49 | 49.41 |
 | Llama-3.2-1B-Instruct | 45.66 | 22.73 | 1.67 | 14.4 | 48.06 | 13.50 | 37.20 | 26.17 |
 
 For the MMLU evaluation, we use a 0-shot CoT setting.
@@ -28,7 +28,6 @@ For the MMLU evaluation, we use a 0-shot CoT setting.
 
 <div align="center">
 
-| | |
 |:---:|:---:|
 | **Architecture** | Mixture-of-Experts (MoE) |
 | **Total Parameters** | 4B |
@@ -49,7 +48,7 @@ For the MMLU evaluation, we use a 0-shot CoT setting.
 
 ### Transformers
 
-The latest version of `transformers` is recommended; at minimum, `transformers>=4.52.4` is required.
+The latest version of `transformers` is recommended; at minimum, `transformers>=4.53.3` is required.
 The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
 
 ```python
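# The diff excerpt ends at the opening fence above, so the body below is a
# minimal sketch of the standard `transformers` chat-template generation flow,
# not the exact code from the commit. The repo id is an assumption inferred
# from the model name on this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "PowerInfer/SmallThinker-4BA0.6B-Instruct"  # assumed repo id

# Load the tokenizer and model; device_map="auto" places weights on GPU if available.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Format a single-turn conversation with the model's chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts models in two sentences."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a completion and decode only the newly generated tokens.
generated_ids = model.generate(**model_inputs, max_new_tokens=256)
output_ids = generated_ids[0][model_inputs.input_ids.shape[1]:]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```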