Update README.md
README.md CHANGED
@@ -35,6 +35,7 @@ For the MMLU evaluation, we use a 0-shot CoT setting.
 | Gemma 3n E4B | 2G, theoretically | 21.93 | 16.58 | 7.37 | 4.01 |
 
 Note: i9 14900 and 1+13 8gen4 use 4 threads; others use the number of threads that achieves the maximum speed. All models here have been quantized to q4_0.
 
+You can deploy SmallThinker with offloading support using [PowerInfer](https://github.com/SJTU-IPADS/PowerInfer/tree/main/smallthinker).
 
 ## Model Card