yi-01-ai committed
Commit eefd5b2
Parent(s): ae3b124

Auto Sync from git://github.com/01-ai/Yi.git/commit/c8505e56959313f3793286ee8bfe7e4ebf16031f

README.md CHANGED
@@ -87,7 +87,7 @@ developers at [01.AI](https://01.ai/).
 <details open>
 <summary>🎯 <b>2023/11/23</b>: The chat models are open to public.</summary>

-This release contains two chat models based on previous released base models, two 8-bits models
+This release contains two chat models based on previous released base models, two 8-bits models quantized by GPTQ, two 4-bits models quantized by AWQ.

 - `Yi-34B-Chat`
 - `Yi-34B-Chat-4bits`

@@ -392,7 +392,7 @@ python quantization/awq/quant_autoawq.py \
   --trust_remote_code
 ```

-Once finished, you can then evaluate the
+Once finished, you can then evaluate the resulting model as follows:

 ```bash
 python quantization/awq/eval_quantized_model.py \