yi-01-ai committed • Commit 3aead14 • Parent(s): 9fe47a7
Auto Sync from git://github.com/01-ai/Yi.git/commit/9bc9255729d150fd2496c1f4f65e7cd486c6c8bf

README.md CHANGED
@@ -150,7 +150,7 @@ pipeline_tag: text-generation
 
 Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.
 
-If you want to deploy Yi models,
+If you want to deploy Yi models, make sure you meet the [software and hardware requirements](#deployment).
 
 ### Chat models
 
@@ -331,7 +331,7 @@ This tutorial guides you through every step of running **Yi-34B-Chat locally on
 
 #### Step 0: Prerequistes
 
-- Make sure Python 3.10 or later version is installed.
+- Make sure Python 3.10 or a later version is installed.
 
 - If you want to run other Yi models, see [software and hardware requirements](#deployment)
 
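The new prerequisite above calls for Python 3.10 or later. As a minimal, purely illustrative check (not part of the Yi repository), you can verify the interpreter version before installing anything:

```python
# Hypothetical sanity check, not from the Yi README: confirm the interpreter
# satisfies the Python 3.10+ prerequisite before installing Yi's dependencies.
import sys

if sys.version_info < (3, 10):
    raise SystemExit(f"Python 3.10 or later is required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```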
@@ -833,8 +833,8 @@ python eval_quantized_model.py --model /quantized_model --trust_remote_code
 <div align="right"> [ <a href="#building-the-next-generation-of-open-source-and-bilingual-llms">Back to top ⬆️ </a> ] </div>
 
 ### Deployment
-
-
+
+If you want to deploy Yi models, make sure you meet the software and hardware requirements.
 
 #### Software requirements
 
@@ -845,7 +845,6 @@ Before using Yi quantized models, make sure you've installed the correct softwar
 Yi 4-bit quantized models | [AWQ and CUDA](https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#install-from-pypi)
 Yi 8-bit quantized models | [GPTQ and CUDA](https://github.com/PanQiWei/AutoGPTQ?tab=readme-ov-file#quick-installation)
 
-
 #### Hardware requirements
 
 Before deploying Yi in your environment, make sure your hardware meets the following requirements.
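The software requirements above pair 4-bit models with AutoAWQ and 8-bit models with AutoGPTQ, both on CUDA. The sketch below shows one way a quantized Yi chat model could be loaded through `transformers` once those packages are installed; the repo id `01-ai/Yi-34B-Chat-4bits`, the chat-template call, and the generation settings are assumptions for illustration, not taken from this commit:

```python
# Minimal sketch (assumptions: AutoAWQ + a CUDA build of PyTorch are installed,
# and the 4-bit chat weights live at 01-ai/Yi-34B-Chat-4bits).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B-Chat-4bits"  # hypothetical example; an 8-bit GPTQ variant would load the same way

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # spread layers across the available GPUs
    torch_dtype="auto",
    trust_remote_code=True,
)

# Build a chat-style prompt via the tokenizer's chat template and generate a reply.
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```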
@@ -881,12 +880,12 @@ Below are detailed minimum VRAM requirements under different batch use cases.
 | Yi-34B | 72 GB | 4 x RTX 4090 <br> A800 (80 GB) |
 | Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |
 
-</details>
-
 ### Learning hub
+
 <details>
-<summary>
+<summary> If you want to learn Yi, you can find a wealth of helpful educational resources here ⬇️</summary>
 <br>
+
 Welcome to the Yi learning hub!
 
 Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more.
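The table at the top of this hunk gives minimum VRAM figures such as 72 GB for Yi-34B. As an illustrative companion (assumed, not part of the Yi tooling), a few lines of `torch.cuda` can report how much total GPU memory a machine actually exposes before a load is attempted:

```python
# Illustrative helper: report total VRAM across visible GPUs and compare it with a
# target requirement. The 72 GB figure for Yi-34B comes from the table above; the
# helper itself is an assumption, not part of the Yi repository.
import torch

def total_vram_gb() -> float:
    if not torch.cuda.is_available():
        return 0.0
    return sum(
        torch.cuda.get_device_properties(i).total_memory
        for i in range(torch.cuda.device_count())
    ) / 1024**3

required_gb = 72  # minimum for Yi-34B per the table above
available_gb = total_vram_gb()
print(f"Total VRAM: {available_gb:.1f} GB (need >= {required_gb} GB for Yi-34B)")
```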
@@ -897,7 +896,7 @@ At the same time, we also warmly invite you to join our collaborative effort by
 
 With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 🥳
 
-
+#### Tutorials
 
 | Type | Deliverable | Date | Author |
 |-------------|--------------------------------------------------------|----------------|----------------|
@@ -1008,14 +1007,13 @@ If you're seeking to explore the diverse capabilities within Yi's thriving famil
 - [📊 Base model performance](#-base-model-performance)
 
 ### 📊 Chat model performance
-
-
-- Both Yi-34B-chat and its variant, Yi-34B-Chat-8bits (GPTQ), take the top spots in tests including MMLU, CMMLU, BBH, and GSM8k.
+
+Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models in the benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.
 
 ![Chat model performance](./assets/img/benchmark_chat.png)
 
 <details>
-<summary
+<summary> Evaluation methods and challenges ⬇️ </summary>
 
 - **Evaluation methods**: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.
 - **Zero-shot vs. few-shot**: in chat models, the zero-shot approach is more commonly employed.
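The evaluation notes above contrast zero-shot and few-shot methods. Purely as an illustration of that distinction, and not the evaluation harness behind the reported scores, the two prompt styles differ only in whether worked examples precede the question:

```python
# Illustrative only: contrast a zero-shot prompt with a 2-shot prompt for a
# GSM8K-style question. This is not the pipeline used to produce the benchmark numbers.
question = "A farmer has 12 apples and gives away 5. How many are left?"

zero_shot = f"Question: {question}\nAnswer:"

few_shot_examples = [
    ("What is 2 + 3?", "5"),
    ("A box holds 10 pens; 4 are removed. How many remain?", "6"),
]
few_shot = "".join(f"Question: {q}\nAnswer: {a}\n\n" for q, a in few_shot_examples) + zero_shot

print(zero_shot)
print("---")
print(few_shot)
```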
@@ -1026,15 +1024,13 @@ If you're seeking to explore the diverse capabilities within Yi's thriving famil
 </details>
 
 ### 📊 Base model performance
-
-- Yi-34B
-- Yi-34B ranks first in MMLU, CMMLU, BBH, and common-sense reasoning.
-- Yi-34B-200K ranks first C-Eval, GAOKAO, and reading comprehension.
+
+The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMML, common-sense reasoning, reading comprehension, and more.
 
 ![Base model performance](./assets/img/benchmark_base.png)
 
 <details>
-<summary
+<summary> Evaluation methods ⬇️</summary>
 
 - **Disparity in Results**: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources like OpenCompass.
 - **Investigation Findings**: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.