update readme
README.md
CHANGED
@@ -156,9 +156,9 @@ response, history = model.chat(tokenizer, "你好", history=None)
 
 ### 效果评测
 
-我们对BF16和Int4
+我们对BF16和Int4模型在基准评测上做了测试(使用zero-shot设置),发现量化模型效果损失较小,结果如下所示:
 
-We illustrate the
+We illustrate the zero-shot performance of both BF16 and Int4 models on the benchmark, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
 
 | Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
 | ------------- | :--------: | :----------: | :----: | :--------: |